Accurate segmentation of the vertebrae in the spine is an essential prerequisite for many applications of image-based spine assessment, surgical planning, clinical diagnosis and treatment, and biomechanical modeling. In this paper, we present a stacked sparse autoencoder (SSAE) model for the segmentation of vertebrae from CT images. After the preprocessing step, we extracted overlapping patches from the vertebral CT images as inputs to the proposed model. The SSAE model was trained in an unsupervised way to learn high-level features from the pixels of the unlabeled image patches. To improve the discriminability of the learned features, we further refined the feature representation in a supervised fashion and fine-tuned the whole model as a feedforward neural network for classifying the overlapping patches. We then validated our model on the publicly available MICCAI CSI2014 dataset and found that it outperforms other state-of-the-art methods.
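As a rough illustration of the pipeline described above, the sketch below performs layer-wise unsupervised pretraining of a sparse autoencoder and then supervised fine-tuning of the stacked encoders with a softmax output on labeled patches. It is written in PyTorch purely for concreteness; the patch size, layer widths, sparsity target, and optimizer settings are assumptions rather than values reported in the paper.

    # Minimal sketch of an SSAE patch classifier (hyperparameters are assumed).
    import torch
    import torch.nn as nn

    PATCH = 16 * 16          # assumed flattened patch size
    HIDDEN = [400, 100]      # assumed hidden-layer widths

    class SparseAE(nn.Module):
        """One autoencoder layer; sparsity is enforced via a penalty on activations."""
        def __init__(self, n_in, n_hidden):
            super().__init__()
            self.enc = nn.Linear(n_in, n_hidden)
            self.dec = nn.Linear(n_hidden, n_in)

        def forward(self, x):
            h = torch.sigmoid(self.enc(x))
            return self.dec(h), h

    def sparsity_penalty(h, rho=0.05, eps=1e-8):
        # KL divergence between target activation rho and mean activation rho_hat
        rho_hat = h.mean(dim=0).clamp(eps, 1 - eps)
        return (rho * torch.log(rho / rho_hat)
                + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()

    def pretrain(ae, loader, epochs=10, beta=1e-3):
        """Unsupervised greedy pretraining of a single autoencoder layer."""
        opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
        mse = nn.MSELoss()
        for _ in range(epochs):
            for x in loader:                  # x: (batch, n_in) flattened patches
                recon, h = ae(x)
                loss = mse(recon, x) + beta * sparsity_penalty(h)
                opt.zero_grad(); loss.backward(); opt.step()

    # After each layer is pretrained on the previous layer's codes, the encoders
    # are stacked with a softmax output and fine-tuned on labeled patches:
    class SSAEClassifier(nn.Module):
        def __init__(self, encoders, n_classes=2):
            super().__init__()
            self.encoders = nn.ModuleList(encoders)
            self.out = nn.Linear(HIDDEN[-1], n_classes)

        def forward(self, x):
            for enc in self.encoders:
                x = torch.sigmoid(enc(x))
            return self.out(x)   # trained with cross-entropy on patch labels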
Over the last few years, major breakthroughs have been achieved in the application of deep learning to many computer vision tasks, such as image classification and segmentation. Automatic liver segmentation from CT images has become an important area of clinical research, with applications including radiotherapy, liver volume measurement, and liver transplant surgery. This paper proposes a novel convolutional neural network for liver segmentation (CNN-LivSeg) consisting of three convolutional layers (each followed by a max-pooling layer) and two fully connected layers, with a final 2-way softmax used for liver discrimination. The weights are initialized from a random Gaussian distribution, which provides a distance-preserving embedding of the data. To avoid a fully 3D CNN, which is computationally expensive and time-consuming, 2D patches were extracted and processed for segmentation. Experiments were performed on the MICCAI-SLiver07 benchmark dataset. The mean Dice similarity coefficient, Jaccard similarity index, accuracy, specificity, and sensitivity were 0.9541, 0.9122, 0.9725, 0.9904, and 0.9652, respectively, suggesting that the proposed method performs well on the test images.
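The abstract specifies the overall network shape but not its hyperparameters, so the following is only a plausible sketch of such an architecture: three convolution/max-pooling blocks, two fully connected layers, a 2-way output trained with softmax cross-entropy, and random Gaussian weight initialization as stated above. Filter counts, kernel sizes, and the 2D patch size are illustrative assumptions, and the class/parameter names are hypothetical.

    # Plausible sketch of a CNN-LivSeg-style patch classifier (sizes assumed).
    import torch
    import torch.nn as nn

    class CNNLivSeg(nn.Module):
        def __init__(self, patch_size=32):
            super().__init__()
            # Three convolutional layers, each followed by max pooling
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            flat = 128 * (patch_size // 8) ** 2
            # Two fully connected layers; the second feeds the 2-way softmax
            self.classifier = nn.Sequential(
                nn.Linear(flat, 256), nn.ReLU(),
                nn.Linear(256, 2),             # liver / non-liver
            )
            # Random Gaussian weight initialization, as stated in the abstract
            for m in self.modules():
                if isinstance(m, (nn.Conv2d, nn.Linear)):
                    nn.init.normal_(m.weight, mean=0.0, std=0.01)
                    nn.init.zeros_(m.bias)

        def forward(self, x):                  # x: (batch, 1, H, W) 2D patches
            x = self.features(x).flatten(1)
            return self.classifier(x)          # logits; softmax applied in the loss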