This PDF file contains the front matter associated with SPIE Proceedings Volume 8676 including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Digital pathology is developing based on the improvement and popularization of whole slide imaging (WSI) scanners. WSI scanners are widely expected to be used as the next-generation microscope for diagnosis; however, their usage is currently mostly limited to education and archiving. Indeed, there are still many hindrances to using WSI scanners for diagnosis (rather than research), two of the main ones being the perceived high cost and small productivity gain obtained by switching from the microscope to a WSI system, and the lack of WSI standardization. We believe that a key factor for advancing digital pathology is the creation of computer-assisted diagnosis (CAD) systems. Such systems require high-resolution digitization of slides and provide a clear added value to the often costly conversion to WSI. We (NEC Corporation) are creating a CAD system named e-Pathologist®. This system is currently used at independent pathology labs for quality control (QC/QA), double-checking pathologists' diagnoses and preventing missed cancers. By the end of 2012, about 80,000 slides (200,000 tissues) of gastric and colorectal samples will have been analyzed by e-Pathologist®. Through the development of e-Pathologist®, it has become clear that a computer program should be inspired by the pathologist's diagnostic process, yet it should not be a mere copy or simulation of it. Indeed, pathologists often approach the diagnosis of slides in a "holistic" manner, examining them at various magnifications, panning and zooming in a seemingly haphazard way that they often have a hard time describing precisely. Hence, no clear recipe has emerged from numerous interviews with pathologists on how exactly to encode a diagnostic expert system. Instead, we focused on extracting a small set of histopathological features that were consistently indicated as important by the pathologists, and then let the computer figure out how to interpret, in a quantitative way, the presence or absence of these features over the entire slide. Using the overall pathologist's diagnosis (assignment to a disease class), we train the computer system with advanced machine learning techniques to predict the disease from the extracted features. By considering the diagnoses of several expert pathologists during the training phase, we ensure that the machine learns a "gold standard" that will be applied consistently and objectively to all subsequent diagnoses, making them more predictable and reliable. Considering the future of digital pathology, it is essential for a CAD system to produce effective and accurate clinical data. To this end, many hurdles remain, including standardization as well as more research into seeking clinical evidence from "computer-friendly" objective measurements of histological images. Currently the most commonly used staining method is H&E (hematoxylin and eosin), but it is extremely difficult to standardize the H&E staining process. Current pathology criteria, categories, definitions, and thresholds are all based on pathologists' subjective observations. Digital pathology is an emerging field, and researchers should bear responsibility not only for developing new algorithms, but also for understanding the meaning of measured quantitative data.
In this work, we introduce a tensor-based computation and modeling framework for the analysis of digital pathology images at different resolutions. We represent digital pathology images as a third-order tensor (a three-way array) with modes: images, features and scales, by extracting features at different scales. The constructed tensor is then analyzed using the most popular tensor factorization methods, i.e., CANDECOMP/PARAFAC and Tucker. These tensor models enable us to extract the underlying patterns in each mode (i.e. images, features and scales) and examine how these patterns are related to each other. As a motivating example, we analyzed 500 follicular lymphoma images corresponding to high power fields, evaluated by three expert hematopathologists. Numerical experiments demonstrate that (i) tensor models capture easily-interpretable patterns showing the significant features and scales, and (ii) patterns extracted by the right tensor model, which in this case is the Tucker model commonly used for exploratory analysis of higher-order tensors, perform as well as the reduced dimensions captured by matrix factorization methods on unfolded data, in terms of follicular lymphoma grading.
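As a rough illustration of the kind of decomposition described above, the sketch below builds an images × features × scales tensor and factorizes it with the CP and Tucker models from the open-source tensorly library; the array sizes and rank choices are illustrative assumptions, not the authors' configuration.

```python
# Illustrative sketch (not the authors' code): factorizing an
# images x features x scales tensor with CP and Tucker models.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac, tucker

# Hypothetical data: 500 images, 20 features, 4 scales.
X = tl.tensor(np.random.rand(500, 20, 4))

# CP (CANDECOMP/PARAFAC): one factor matrix per mode, shared rank.
cp_weights, cp_factors = parafac(X, rank=3)

# Tucker: a small core tensor plus one factor matrix per mode,
# allowing a different rank in each mode.
core, tucker_factors = tucker(X, rank=[3, 5, 2])

# The image-mode factors can serve as reduced features, e.g. for grading.
image_patterns = tucker_factors[0]   # shape (500, 3)
print(image_patterns.shape)
```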
We introduce a novel radiohistomorphometric method for quantitative correlation and subsequent discovery of imaging markers for aggressive prostate cancer (CaP). While this approach can be employed in the context of any imaging modality and disease domain, we seek to identify quantitative dynamic contrast enhanced (DCE) magnetic resonance imaging (MRI) attributes that are highly correlated with the density and architecture of tumor microvessels, surrogate markers of CaP aggressiveness. This retrospective study consisted of five Gleason score matched patients who underwent 3 Tesla multiparametric MRI prior to radical prostatectomy (RP). The excised gland was sectioned and quartered with a rotary knife. For each serial section, digitized images of individual quadrants were reconstructed into pseudo whole mount sections via a previously developed stitching program. The individual quadrants were stained with the vascular marker CD31 and annotated for CaP by an expert pathologist. The stained microvessel regions were quantitatively characterized in terms of density and architectural arrangement via graph algorithms, yielding a series of quantitative histomorphometric features. The reconstructed pseudo whole mount histologic sections were non-linearly co-registered with DCE MRI to identify tumor extent on MRI on a voxel-by-voxel basis. Pairwise correlations between kinetic and microvessel features within CaP-annotated regions on the two modalities were computed to identify highly correlated attributes. Preliminary results of the radiohistomorphometric correlation identified 8 DCE MRI kinetic features that were highly and significantly (p<0.05) correlated with a number of microvessel parameters. Most of the identified imaging features were related to the rate of washout (Rwo) and the initial area under the curve (IAUC). Association of those attributes with Gleason patterns showed that the identified imaging features clustered most of the tumors with primary Gleason pattern 3 together. These results suggest that Rwo and IAUC may be promising candidate imaging markers for identification of aggressive CaP in vivo.
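A minimal sketch of the pairwise correlation step described above, assuming the DCE kinetic and microvessel histomorphometric features have already been extracted per co-registered region; the array shapes and feature names are hypothetical.

```python
# Sketch of the pairwise correlation step (feature layout is hypothetical).
import numpy as np
from scipy import stats

# Hypothetical per-region measurements within CaP-annotated areas:
# rows = regions, columns = features.
dce_kinetic = np.random.rand(40, 8)     # e.g. washout rate, IAUC, ...
microvessel = np.random.rand(40, 5)     # e.g. vessel density, graph features

significant_pairs = []
for i in range(dce_kinetic.shape[1]):
    for j in range(microvessel.shape[1]):
        r, p = stats.pearsonr(dce_kinetic[:, i], microvessel[:, j])
        if p < 0.05:
            significant_pairs.append((i, j, r, p))

# Pairs surviving the p < 0.05 filter are candidate imaging markers.
print(len(significant_pairs), "significant feature pairs")
```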
We present a system that detects cancer on slides of gastric tissue sections stained with hematoxylin and eosin (H&E). At its heart is a classifier trained using the semi-supervised multi-instance learning (MIL) framework, where each tissue is represented by a set of regions of interest (ROIs) and a single label. Such labels are readily obtained because pathologists diagnose each tissue independently as part of the normal clinical workflow. From a large dataset of over 26K gastric tissue sections from over 12K patients, obtained from a clinical load spanning several months, we train a MIL classifier on a patient-level partition of the dataset (2/3 of the patients) and obtain a very high performance of 96% (AUC), tested on the remaining 1/3 never-before-seen patients (over 8K tissues). We show this level of performance to match the more costly supervised approach in which individual ROIs must be labeled manually. The large amount of data used to train this system gives us confidence in its robustness and that it can be safely used in a clinical setting. We demonstrate how it can improve the clinical workflow when used for pre-screening or quality control. For pre-screening, the system can diagnose 47% of the tissues with a very low likelihood (< 1%) of missing cancers, thus halving the clinicians' caseload. For quality control, compared to random rechecking of 33% of the cases, the system achieves a three-fold increase in the likelihood of catching cancers missed by pathologists. The system is currently in regular use at independent pathology labs in Japan where it is used to double-check clinicians' diagnoses. By the end of 2012 it will have analyzed over 80,000 slides of gastric and colorectal samples (200,000 tissues).
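One simple way to picture the multi-instance setting described above is to score each ROI and aggregate scores at the tissue level; the naive max-pooling baseline below is a stand-in for illustration only, not the authors' semi-supervised MIL classifier, and all data are synthetic.

```python
# Stand-in sketch of multi-instance aggregation: each tissue is a bag of
# ROI feature vectors carrying a single tissue-level label.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical bags: list of (n_rois, n_features) arrays plus tissue labels.
bags = [rng.normal(size=(rng.integers(5, 20), 16)) for _ in range(200)]
labels = rng.integers(0, 2, size=200)

# Naive baseline: copy the bag label to every ROI, train an instance-level
# scorer, then max-pool ROI scores to obtain a tissue-level score.
X = np.vstack(bags)
y = np.concatenate([np.full(len(b), l) for b, l in zip(bags, labels)])
clf = LogisticRegression(max_iter=1000).fit(X, y)

tissue_scores = np.array([clf.predict_proba(b)[:, 1].max() for b in bags])
# AUC computed on the training bags here only to illustrate the API.
print("AUC:", roc_auc_score(labels, tissue_scores))
```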
Studying inflammatory cell subsets in transplant biopsies can be important for diagnosis and for understanding pathogenesis. Counting the different subsets of lymphocytes and macrophages in the immunostained renal biopsy is often considered the only way to characterize the inflammatory infiltrate. Counting each subset of cells in each biopsy under a light microscope can be extremely tedious, time consuming and subject to inter- and intra-personal variability. This paper presents a new method to automatically count the number of CD8-positive cytotoxic T-cells on scanned images of immunostained slides of renal allograft biopsies. The method uses a normalized multi-scale difference of Gaussian to detect potential cytotoxic T-cell candidate regions, in both the color channel and the intensity channel. It then fuses the information from both channels' candidate regions to detect the individual cells within cell clumps. The evaluation of the proposed method shows strong agreement between the proposed method's markings and the pathologist's markings (94.4%).
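For orientation, the sketch below shows a generic multi-scale difference-of-Gaussian detection on a single channel using scikit-image; the file name, channel choice, and sigma range are assumptions for illustration and do not reproduce the paper's normalized two-channel fusion.

```python
# Sketch of multi-scale difference-of-Gaussian (DoG) cell-candidate
# detection on one channel of a stained image (parameters are illustrative).
from skimage import io, color
from skimage.feature import blob_dog

rgb = io.imread("cd8_stained_tile.png")     # hypothetical file name
gray = 1.0 - color.rgb2gray(rgb)            # dark stained cells become bright blobs

# min_sigma/max_sigma bound the expected cell radius range (in pixels).
blobs = blob_dog(gray, min_sigma=3, max_sigma=10, threshold=0.1)
# Each row is (row, col, sigma); sigma reflects the detected cell scale.
print(f"{len(blobs)} candidate CD8+ cells detected")
```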
The scoring of mitotic figures is an integrated part of the Bloom and Richardson system for grading of invasive breast cancer. It is routinely done by pathologists by visual examination of hematoxylin and eosin (H&E) stained histology slides on a standard light microscope. As such, it is a tedious process prone to inter- and intra-observer variability. In the last decade, whole-slide imaging (WSI) has emerged as the “digital age” alternative to the classical microscope. The increasing acceptance of WSI in pathology labs has brought an interest in the application of automatic image analysis methods, with the goal of reducing or completely eliminating manual input to the analysis. In this paper, we present a method for automatic detection of mitotic figures in breast cancer histopathology images. The proposed method consists of two main components: candidate extraction and candidate classification. Candidate objects are extracted by image segmentation with the Chan-Vese level set method. The candidate classification component aims at classifying all extracted candidates as being a mitotic figure or a false object. A statistical classifier is trained with a number of features that describe the size, shape, color and texture of the candidate objects. The proposed detection procedure was developed using a set of 18 whole-slide images, with over 900 manually annotated mitotic figures, split into independent training and testing sets. The overall true positive rate on the testing set was 59.5%, with 4.2 false positives per high power field (HPF).
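A minimal sketch of the candidate-extraction stage using the Chan-Vese implementation in scikit-image; the input file, grayscale conversion, and parameter values are assumptions, and the subsequent feature extraction and statistical classification are only indicated in comments.

```python
# Sketch of candidate extraction with the Chan-Vese level set
# (scikit-image implementation; parameters are illustrative).
from skimage import io, color, measure
from skimage.segmentation import chan_vese

hpf = color.rgb2gray(io.imread("hpf_tile.png"))   # hypothetical H&E field

# Chan-Vese partitions the field into foreground/background regions.
mask = chan_vese(hpf, mu=0.25, lambda1=1, lambda2=1)

# Each connected component becomes a mitosis candidate; size, shape, color
# and texture features of these candidates would then feed a classifier.
candidates = measure.regionprops(measure.label(mask))
print(len(candidates), "candidate objects")
```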
This paper presents a set of novel structural texture features for quantifying nuclear chromatin patterns in cells on a conventional Pap smear. The features are derived from an initial segmentation of the chromatin into blob-like texture primitives. The results of a comprehensive feature selection experiment, including the set of proposed structural texture features and a range of different cytology features drawn from the literature, show that two of the four top-ranking features are structural texture features. They also show that a combination of structural and conventional features yields a classification performance of 0.954±0.019 (AUC±SE) for the discrimination of normal (NILM) and abnormal (LSIL and HSIL) slides. The results of a second classification experiment, using only normal-appearing cells from both normal and abnormal slides, demonstrate that a single structural texture feature measuring chromatin margination yields a classification performance of 0.815±0.019. Overall the results demonstrate the efficacy of the proposed structural approach and that it is possible to detect malignancy associated changes (MACs) in Papanicolaou-stained cells.
Clinical oncological imaging is performed with various modalities: CT, MRI and F-18-FDG-PET. Recently, investigators have used diffusion-advection-reaction tumor-growth models for registration to a brain atlas for MRI brain-tumor datasets. We wish to extract model parameters from clinical time-series scans of tumors (e.g. CT or MRI of brain or lung tumors) to see if some of the parameters, tumor growth rate and/or diffusion coefficient, could potentially serve as predictive markers for monitoring disease and treatment response. We can then correlate with disease history and/or PET SUV to assess the viability of the model parameters as markers. One hurdle is that for the majority of patients only 1 or 2 scans would be available for a specific tumor. We first take an existing diffusion-advection-reaction dynamic tumor-growth model and generate a series of synthetic tumors. Then we try to invert the model and recover the coefficients for one or multiple target scans, minimizing the sum-squared error using APPSPACK. We find that for this idealized case we could recover the diffusion-coefficient and growth-rate parameters with ~2%-3% error whether we used the entire time series or a single time point. However, in general, for either case (multiple or single time scan) some additional parameters, such as the tumor scan time and starting locations, may be necessary in the minimization. In a second (novel) approach, we hypothesize that over a short time scale the tumor density/volume change is small (or undetectable), so that the cell birth and death and diffusion terms are in near-equilibrium. The diffusion-coefficient and growth parameters of this steady-state model may then be extracted from even a single CT or MRI scan available for each patient. Our steady-state forward simulation and inversion could recover steady-state diffusion and growth-rate parameters with near-perfect (~0.6%) error for the no-noise case and ~0.7% error for a high-noise case. The steady-state model fitted excellently to a lung-tumor (1-D) profile of an (anonymized) patient with only 1.74% fitting error in a sum-squared sense.
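The original work performs the derivative-free minimization with APPSPACK; the sketch below illustrates the same inversion idea with SciPy's optimizer and a deliberately toy 1-D profile. The forward model here is only a placeholder to show the fitting loop, not the paper's diffusion-advection-reaction model.

```python
# Stand-in sketch of model inversion: recover diffusion and growth parameters
# by minimizing the sum-squared error between a forward-simulated tumor
# profile and an observed one. The toy forward model is NOT the paper's PDE.
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-20, 20, 200)                    # 1-D spatial axis (mm)

def forward_profile(diffusion, growth):
    # Toy profile whose width/amplitude depend on the two parameters.
    return growth * np.exp(-x**2 / (2.0 * diffusion))

true_params = (6.0, 1.5)
observed = forward_profile(*true_params) + 0.01 * np.random.randn(x.size)

def sse(params):
    d, g = params
    return np.sum((forward_profile(d, g) - observed) ** 2)

result = minimize(sse, x0=[1.0, 1.0], method="Nelder-Mead")
print("recovered (diffusion, growth):", result.x)
```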
Imaging may enable the determination of the spatial distribution and aggressiveness of prostate cancer in vivo before treatment, possibly supporting diagnosis, therapy selection, and focal therapy guidance. 3D reconstruction of prostate histology facilitates the validation of such imaging applications. We evaluated four histology–ex vivo magnetic resonance (MR) image 3D reconstruction algorithms comprising two similarity metrics (mutual information MMI or fiducial registration error MFRE) and two search domains (affine transformations TA or fiducial-constrained affine transformations TF). Seven radical prostatectomy specimens were imaged with MR imaging, processed for whole-mount histology, and digitized as histology images. The algorithms were evaluated on the reconstruction error and on its sensitivity to translational and rotational errors in initialization. Reconstruction error was quantified as the target registration error (TRE): the post-reconstruction distance between homologous point landmarks (7–15 per histology section; 132 total) identified on histology and MR images. Sensitivity to initialization was quantified using a linear model relating TRE to varied levels of translational/rotational initialization error. The algorithm using MMI and TA yielded a mean TRE of 1.2±0.7 mm when initialized using an approach that assumes histology corresponds to the front faces of tissue blocks, but was sensitive to initialization error. The algorithm using MFRE and TA yielded a mean TRE of 0.8±0.4 mm with minimal sensitivity to initialization errors. Compared to the method used to initialize the algorithms (mean TRE 1.4±0.7 mm), a study using an algorithm with a mean TRE of 0.8 mm would require 27% fewer subjects for certain imaging validation study designs.
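A minimal sketch of the TRE computation as defined above: map the histology landmarks into MR space with the recovered transform and measure the residual distances to their homologous MR landmarks. The affine-matrix form and the landmark arrays are illustrative assumptions.

```python
# Sketch of target registration error (TRE): distance between homologous
# landmarks after applying the recovered transform.
import numpy as np

def target_registration_error(hist_pts, mr_pts, transform):
    """hist_pts, mr_pts: (N, 3) homologous landmarks; transform: a 4x4
    matrix mapping histology coordinates into MR space (example form)."""
    hist_h = np.c_[hist_pts, np.ones(len(hist_pts))]   # homogeneous coords
    mapped = (transform @ hist_h.T).T[:, :3]
    return np.linalg.norm(mapped - mr_pts, axis=1)     # per-landmark TRE (mm)

# Hypothetical usage with an identity transform and shifted landmarks:
pts = np.random.rand(132, 3) * 40
tre = target_registration_error(pts, pts + 0.5, np.eye(4))
print("mean TRE:", tre.mean(), "mm")
```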
Large-scale in vitro cell growth experiments require automated segmentation and tracking methods to construct cell lineages in order to aid cell biologists in further analysis. Flexible segmentation algorithms that easily adapt to the specific type of problem at hand, together with directly applicable tracking methods, are fundamental building blocks for setting up multi-purpose pipelines. Segmentation by discriminative dictionary learning and a graph-formulated tracking method constraining the allowed topology changes are combined here to accommodate highly irregular cell shapes and movement patterns. A mitosis detector constructed from empirical observations of cells in a pre-mitotic state interacts with the graph formulation to dynamically allow for cell mitosis when appropriate. Track consistency is ensured by introducing pragmatic constraints and the notion of blob states. We validate the proposed pipeline by tracking pig neural progenitor cells through a time-lapse experiment consisting of 825 images collected over 69 hours. Each step of the tracking pipeline is validated separately by comparison with manual annotations. The number of tracked cells increases from approximately 350 to 650 during the time period.
Free-form deformation (FFD) is a popular algorithm for non-linear image registration because of its ability to accurately recover deformations. However, due to the unconstrained nature of elastic registration, FFD may introduce unrealistic deformations, especially when differences between template and target image are large, thereby necessitating a regularizer to constrain the registration to a physically meaningful transformation. Prior knowledge in the form of a statistical deformation model (SDM) in a registration scheme has been shown to function as an effective regularizer. With a similar underlying premise, in this paper we present a novel regularizer for FFD that leverages knowledge of known, valid deformations to train an SDM. At each iteration of the FFD registration, the SDM is used to calculate the likelihood of a given deformation occurring and appropriately influence the similarity metric to limit the registration to realistic deformations. We quantitatively evaluate the robustness of the SDM regularizer in the framework of FFD through a set of synthetic experiments using brain images with a range of induced deformations and 3 types of multiplicative noise: Gaussian, salt and pepper, and speckle. We demonstrate that FFD with the inclusion of the SDM regularizer yields up to a 19% increase in normalized cross correlation (NCC) and a 16% decrease in root mean squared (RMS) error and mean absolute distance (MAD). Registration performance was also evaluated qualitatively and quantitatively in spatially aligning ex vivo pseudo whole mount histology (WMH) sections and in vivo prostate MRI in order to map the spatial extent of prostate cancer (CaP) onto corresponding radiologic imaging. Across all evaluation measures (MAD, RMS, and DICE), regularized FFD performed significantly better than unregularized FFD.
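One common way to realize an SDM of this kind is a PCA model over training control-point displacements, with a Mahalanobis-style penalty added to the similarity metric; the sketch below illustrates that idea under those assumptions and is not the paper's exact formulation.

```python
# Sketch of a PCA-based statistical deformation model used as a regularizer:
# score how plausible a candidate control-point deformation is under the
# training deformations (a simplified stand-in for the paper's SDM term).
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical training set: 50 valid deformation fields, each flattened
# to a vector of B-spline control-point displacements.
training_deformations = np.random.randn(50, 300)
pca = PCA(n_components=10).fit(training_deformations)

def sdm_penalty(deformation):
    """Mahalanobis-like distance in PCA space; larger values mean the
    candidate deformation is less likely under the learned model."""
    coeffs = pca.transform(deformation.reshape(1, -1))[0]
    return float(np.sum(coeffs**2 / pca.explained_variance_))

# During FFD optimization this penalty would be combined with the image
# similarity metric (e.g. NCC) to discourage implausible deformations.
print(sdm_penalty(np.random.randn(300)))
```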
Image-based analysis of the vascular structures of murine liver samples is an important tool for scientists to understand liver physiology and morphology. Typical assessment methods are MicroCT, which allows for acquiring images of the whole organ while lacking resolution for fine details, and confocal laser scanning microscopy, which allows detailed insights into fine structures while lacking the broader context. Imaging of histological serial whole slide sections is a recent technology able to fill this gap, since it provides fine resolution up to the cellular level, but on a whole-organ scale. However, whole slide imaging is a modality providing only 2D images. The challenge is therefore to reconstruct the 3D vessel structures from stacks of serial sections. In this paper we present a semi-automatic procedure to achieve this goal. We employ an automatic method that detects vessel structures based on continuity and shape characteristics. Furthermore, it supports the user in performing manual corrections where required. With our methods we were able to successfully extract and reconstruct vessel structures from a stack of 100 and a stack of 397 serial sections of a mouse liver lobe, thus demonstrating the potential of our approach.
Registration of histological slices to volumetric imaging of the prostate is an important task that can be used to optimize imaging for cancer detection. Such registration is challenging due to change in volume of the specimen during fixation, and misalignment of the histological slices during preparation and digital scanning. In this work we propose a multiple-slice to volume registration method in which a stack of equispaced, uniaxial but unaligned 2D contours, extracted from digitally scanned whole-mount histological slices, is registered to a 3D surface, extracted from a volumetric image of the prostate. Initially, the stack of unaligned contours is coarsely aligned to the surface as a whole. Then, each contour is finely registered to the surface while being confined to its plane along the sectioning axis. We incorporate the method in a particle filtering framework to compensate for the high dimensionality of the search space and multi-modal nature of the problem. Moreover, such framework allows modeling the uncertainty in the segmentation of the contours and surface, in order to derive optimal registration parameters in a Bayesian approach. The proposed algorithm is demonstrated and evaluated on both synthetic and clinical data. The mean area overlap of the registered gland and the segmented histology was found to be 90.2%, with a mean registration error of 1.8mm between visible landmarks.
Accurate pathology assessment of post-prostatectomy specimens is important to determine the need for, and to guide, potentially life-saving adjuvant therapy. Digital pathology imaging is enabling a transition to more objective quantification of some surgical pathology assessments, such as tumour volume, that are currently visually estimated by pathologists and subject to inter-observer variability. One challenge for tumour volume quantification is the traditional 3–5 mm spacing of images acquired from sections of radical prostatectomy specimens. Tumour volume estimates may benefit from a well-motivated approach to inter-slide tumour boundary interpolation. We implemented and tested a level set-based interpolation method and found that it produced 3D tumour surfaces that may be more biologically plausible than those produced via a simpler nearest-slide interpolation. We found that the simpler method produced larger tumour volumes, compared to the level set method, by a median factor of 2.3. For contexts where only tumour volume is of interest, we determined that the volumes produced via the simpler method can be linearly adjusted to the level set-produced volumes. The smoother surfaces from level set interpolation yielded measurable differences in tumour boundary location; this may be important in several clinical/research contexts (e.g. pathology-based imaging validation for focal therapy planning).
The development of tools for the processing of color images is often complicated by nonstandardness – the notion that different image regions corresponding to the same tissue will occupy different ranges in the color spectrum. In digital pathology (DP), these issues are often caused by variations in slide thickness, staining, scanning parameters, and illumination. Nonstandardness can be addressed via standardization, a pre-processing step that aims to improve color constancy by realigning color distributions of images to match that of a predefined template image. Unlike color normalization methods, which aim to scale (usually linearly or assuming that the transfer function of the system is known) the intensity of individual images, standardization is employed to align distributions in broad tissue classes (e.g. epithelium, stroma) across different DP images irrespective of institution, protocol, or scanner. Intensity standardization has previously been used for addressing the issue of intensity drift in MRI images, where similar tissue regions have different image intensities across scanners and patients. However, this approach is a global standardization (GS) method that aligns histograms of entire images at once. By contrast, histopathological imagery is complicated by the (a) additional information present in color images and (b) heterogeneity of tissue composition. In this paper, we present a novel color Expectation Maximization (EM) based standardization (EMS) scheme to decompose histological images into independent tissue classes (e.g. nuclei, epithelium, stroma, lumen) via the EM algorithm and align the color distributions for each class independently. Experiments are performed on prostate and oropharyngeal histopathology tissues from 19 and 26 patients, respectively. Evaluation methods include (a) a segmentation-based assessment of color consistency in which normalized median intensity (NMI) is calculated from segmented regions across a dataset and (b) a quantitative measure of histogram alignment via mean landmark distance. EMS produces lower NMI standard deviations (i.e. greater consistency) of 0.0054 and 0.0034 for prostate and oropharyngeal cohorts, respectively, than unstandardized (0.034 and 0.026) and GS (0.031 and 0.017) approaches. Similarly, we see decreased mean landmark distance for EMS (2.25 and 4.20) compared to both unstandardized (54.8 and 27.3) and GS (27.1 and 8.8) images. These results suggest that the separation of broad tissue classes in EMS is vital to the standardization of DP imagery and subsequent development of computerized image analysis tools.
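As a rough illustration of the class-wise idea described above, the sketch below decomposes pixels into a few tissue classes with a Gaussian mixture (EM) in a color space and then aligns each class's distribution to a template. The class count, color space, file names, and the assumed correspondence of classes between image and template are all simplifying assumptions, not the paper's exact EMS scheme.

```python
# Sketch of EM-based, class-wise color standardization (simplified).
import numpy as np
from skimage import io, color
from sklearn.mixture import GaussianMixture

def class_labels(rgb_image, n_classes=4):
    lab = color.rgb2lab(rgb_image).reshape(-1, 3)
    gmm = GaussianMixture(n_components=n_classes, covariance_type="full",
                          random_state=0).fit(lab)
    return lab, gmm.predict(lab)            # per-pixel hard class labels

img_lab, img_cls = class_labels(io.imread("slide_tile.png"))       # hypothetical
tmpl_lab, tmpl_cls = class_labels(io.imread("template_tile.png"))  # hypothetical

# Per class, shift/scale the image's color distribution toward the template's.
# NOTE: class correspondence between the two mixtures is assumed for brevity.
standardized = img_lab.copy()
for c in range(4):
    src, dst = img_lab[img_cls == c], tmpl_lab[tmpl_cls == c]
    if len(src) and len(dst):
        standardized[img_cls == c] = (src - src.mean(0)) / (src.std(0) + 1e-6) \
                                     * dst.std(0) + dst.mean(0)
```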
Prostate cancer (CaP) is evidenced by profound changes in the spatial distribution of cells. The spatial arrangement and architectural organization of nuclei, especially the clustering of cells, within CaP histopathology is known to be predictive of disease aggressiveness and potentially of patient outcome. Quantitative histomorphometry is a relatively new field which attempts to develop and apply novel advanced computerized image analysis and feature extraction methods for the quantitative characterization of tumor morphology on digitized pathology slides. Recently, graph theory has been used to characterize the spatial arrangement of these cells by constructing a graph with cells/nuclei as the nodes. One disadvantage of several extant graph-based algorithms (Voronoi, Delaunay, minimum spanning tree) is that they do not allow for extraction of local spatial attributes from complex networks such as those that emerge from large histopathology images with potentially thousands of nuclei. In this paper, we define a cluster of cells as a node and construct a novel graph called the Cell Cluster Graph (CCG) to characterize local spatial architecture. The CCG is constructed by first identifying the cell clusters to use as nodes for the construction of the graph. Pairwise spatial relationships between nodes are translated into edges of the CCG, each of which is assigned a certain probability of existing. Spatial constraints are employed to deconstruct the entire graph into subgraphs, and we then extract global and local graph-based features from the CCG. We evaluated the ability of the CCG to predict 5-year biochemical failure in men with CaP who had previously undergone radical prostatectomy. Features extracted from CCGs constructed using nuclei as nodal centers on tissue microarray (TMA) images obtained from the surgical specimens of 80 patients allowed us to train a support vector machine classifier via a 3-fold randomized cross-validation procedure, yielding a classification accuracy of 83.1±1.2%. By contrast, the Voronoi, Delaunay, and minimum spanning tree based graph classifiers yielded classification accuracies of 67.1±1.8% and 60.7±0.9%, respectively.
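A toy sketch of the probabilistic edge construction between cluster centroids, under the simple reading that edge probability decays with spatial distance; the decay exponent, centroid positions, and sampling rule are illustrative assumptions rather than the paper's exact definition.

```python
# Sketch of probabilistic edge construction between cell-cluster centroids.
import numpy as np

rng = np.random.default_rng(0)
centroids = rng.uniform(0, 1000, size=(50, 2))   # hypothetical cluster centers
alpha = 0.4                                       # decay parameter (assumed)

edges = []
for i in range(len(centroids)):
    for j in range(i + 1, len(centroids)):
        d = np.linalg.norm(centroids[i] - centroids[j])
        p_edge = d ** (-alpha)                    # probability decays with distance
        if rng.random() < min(1.0, p_edge):
            edges.append((i, j))

# Graph features (degree statistics, clustering coefficients, path lengths, ...)
# would then be extracted from the resulting subgraphs for classification.
print(len(edges), "edges in the cell cluster graph")
```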
M. Khalid Khan Niazi, Michael Pennell, Camille Elkins, Jessica Hemminger, Ming Jin, Sean Kirby, Habibe Kurt, Barrie Miller, Elizabeth Plocharczyk, et al.
Presence of Ki-67, a nuclear protein, is typically used to measure cell proliferation. The quantification of the Ki-67 proliferation index is performed visually by the pathologist; however, this is subject to inter- and intra-reader variability. Automated techniques utilizing digital image analysis by computers have emerged. The large variations in specimen preparation, staining, and imaging, as well as the true biological heterogeneity of tumor tissue, often result in variable intensities in Ki-67 stained images. These variations affect the performance of currently developed methods. To optimize the segmentation of Ki-67 stained cells, one should define a data-dependent transformation that accounts for these color variations instead of defining a fixed linear transformation to separate different hues. To address these issues in images of tissue stained with Ki-67, we propose a methodology that exploits the intrinsic properties of the CIE L∗a∗b∗ color space to translate this complex problem into an automatic entropy-based thresholding problem. The developed method was evaluated through two reader studies with pathology residents and expert hematopathologists. Agreement between the proposed method and the expert pathologists was good (CCC = 0.80).
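For illustration of the general idea, the sketch below applies Kapur's maximum-entropy threshold to the a* channel of an image converted to CIE L*a*b*; the channel choice, file name, and thresholding criterion are assumptions for this sketch and may differ from the paper's data-dependent transformation.

```python
# Sketch of an entropy-based threshold in CIE L*a*b* space (Kapur's criterion).
import numpy as np
from skimage import io, color

def kapur_threshold(values, n_bins=256):
    hist, edges = np.histogram(values, bins=n_bins)
    p = hist.astype(float) / hist.sum()
    best_t, best_h = 1, -np.inf
    for t in range(1, n_bins - 1):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 < 1e-12 or p1 < 1e-12:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0])) \
            - np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))
        if h > best_h:
            best_h, best_t = h, t
    return edges[best_t]

lab = color.rgb2lab(io.imread("ki67_tile.png"))   # hypothetical file name
thr = kapur_threshold(lab[..., 1].ravel())        # threshold on the a* channel
positive_mask = lab[..., 1] > thr                 # candidate Ki-67 positive pixels
```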
We investigate the effects of common types of image manipulation and image degradation on the perceived image quality (IQ) of digital pathology slides. The reference images in our study were digital images of animal pathology samples (gastric fundic glands of a dog and liver of a foal) stained with haematoxylin and eosin. The following 5 types of artificial manipulation were applied to the images, each very subtle (though visually discernible) and always one at a time: blurring, gamma modification, adding noise, change in color saturation, and JPG compression. Three groups of subjects, pathology experts (PE), pathology students (PS) and imaging experts (IE), assessed 6 IQ attributes in 72 single-stimulus trials. The following perceptual IQ attribute ratings were collected: overall IQ, blur disturbance, quality of contrast, noise disturbance, and quality of color saturation. Our results indicate that IQ ratings vary quite significantly with expertise; in particular, PE and IE tend to judge IQ according to different criteria. IE seem notably more sensitive to noise than PE who, on the other hand, tend to be sensitive to manipulations of color and gamma parameters. It remains an important question for future research to examine the impact of IQ on the diagnostic performance of PE; such work would build on our present findings in suggesting directions for further development of numerical IQ metrics for digital pathology data.
User experience with viewing images in pathology is crucial for accurate interpretation and diagnosis. With digital pathology, images are read on a display system, and this poses new questions, such as: what is the difference in terms of pixelation, refresh lag or obscured features compared to an optical microscope? Is there a resultant change in user performance in terms of speed of slide review, perception of adequacy and quality, or diagnostic confidence? A prior psychophysical study comparing various display modalities for whole slide imaging (WSI) in pathology was carried out at the University of Pittsburgh Medical Center (UPMC) in the USA. This prior study compared professional and non-professional grade display modalities and highlighted the importance of using a medical-grade display to view pathological digital images. This study was duplicated in Europe at the Department of Pathology of Erasme Hospital (Université Libre de Bruxelles (ULB)) in an attempt to corroborate these findings. Digital WSI with corresponding glass slides of 58 cases, including surgical pathology and cytopathology slides of varying difficulty, were employed. Similar non-professional and professional grade display modalities were compared to an optical microscope (Olympus BX51). Displays ranged from a laptop (DELL Latitude D620), to a consumer grade display (DELL E248WFPb), to two professional grade monitors (Eizo CG245W and Barco MDCC-6130). Three pathologists were selected from the Department of Pathology at Erasme Hospital (ULB) in Belgium to view and interpret the pathological images on these different displays. The results show that non-professional grade displays (laptop and consumer) provide an inferior user experience compared to professional grade monitors and the optical microscope.
The purpose of this work was to evaluate a newly developed content-based retrieval approach for characterizing a range of different white blood cells from a database of imaged peripheral blood smears. Specimens were imaged using a 20× magnification to provide adequate resolution and a sufficiently large field of view. The resulting database included a test ensemble of 96 images (1000×1000 pixels each). In this work, we propose a four-step content-based retrieval method and evaluate its performance. The content-based image retrieval (CBIR) method starts from white blood cell identification, followed by three sequential steps including coarse searching, refined searching, and finally mean-shift clustering using a hierarchical annular histogram (HAH). The prototype system was shown to reliably retrieve those candidate images exhibiting the highest-ranked (most similar) characteristics to the query. The results presented here show that the algorithm was able to parse out subtle staining differences and spatial patterns and distributions for the entire range of white blood cells under study. Central to the design of the system is that it capitalizes on lessons learned by our team while observing human experts when they are asked to carry out these same tasks.
Recent advances in information technology have improved pathological virtual-slide technology and studies of diagnostic support systems for pathological images. Diagnostic support systems utilize quantitative indices determined by image processing. In previous studies on diagnostic support systems, carcinomatous areas of breast or lung have been recognized using feature quantities such as nuclear size, complexity, and internuclear distances based on graph theory, among other features. The addition of new feature quantities is important for improving recognition accuracy. We focused on hepatocellular carcinoma (HCC) and investigated new feature quantities for histological images of HCC. One of the most important histological features of HCC is the trabecular pattern. For diagnosing cancer, it is important to recognize the tumor cell trabeculae. We propose a new algorithm for calculating the number of cell layers in histological images of HCC in tissue sections stained with hematoxylin and eosin. For the calculation, we used a Delaunay diagram based on the median points of nuclei, deleted the sinusoid and fat-droplet regions from the Delaunay diagram, and counted the Delaunay lines while applying a thinning algorithm. Moreover, we experimented with calculating the number of cell layers with our method for different histological grades of HCC. The number of cell layers discriminated tumor differentiations and Edmondson grades; therefore, our algorithm may serve as an index of HCC for diagnostic support systems.
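A minimal sketch of the Delaunay construction over nuclear centroids that the cell-layer counting builds on; the centroids below are random placeholders, and the sinusoid/fat-droplet deletion and thinning steps are only indicated in comments.

```python
# Sketch of the Delaunay diagram over nuclear median points.
import numpy as np
from scipy.spatial import Delaunay

nuclei_centroids = np.random.rand(200, 2) * 1000   # hypothetical (x, y) positions
tri = Delaunay(nuclei_centroids)

# Collect the unique Delaunay edges; in the described method, edges passing
# through sinusoid or fat-droplet regions would be deleted before the
# remaining lines are thinned and counted to estimate the cell-layer number.
edges = set()
for simplex in tri.simplices:
    for k in range(3):
        a, b = sorted((simplex[k], simplex[(k + 1) % 3]))
        edges.add((a, b))
print(len(edges), "Delaunay edges")
```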
Within the complex branching system of the breast, terminal duct lobular units (TDLUs) are the anatomical location where most cancer originates. With aging, TDLUs undergo physiological involution, reflected in a loss of structural components (acini) and a reduction in total number. Data suggest that women undergoing benign breast biopsies that do not show age-appropriate involution are at increased risk of developing breast cancer. To date, TDLU assessments have generally been made by qualitative visual assessment rather than by objective quantitative analysis. This paper introduces a technique to automatically estimate a set of quantitative measurements and use those variables to more objectively describe and classify TDLUs. To validate the accuracy of our system, we computed the morphological properties of 51 TDLUs in breast tissues donated for research by volunteers in the Susan G. Komen Tissue Bank and compared the results to those of a pathologist, demonstrating 70% agreement. Secondly, in order to show that our method is applicable to a wider range of datasets, we analyzed 52 TDLUs from biopsies performed for clinical indications in the National Cancer Institute Breast Radiology and Study of Tissues (BREAST) STAMP project and obtained 82% correlation with visual assessment. Lastly, we demonstrate the ability to uncover novel measures when researching the structural properties of the acini by applying machine learning and clustering techniques. Through our study we found that while the number of acini per TDLU increases exponentially with the TDLU diameter, the average elongation and roundness remain constant.
A new method for assessing color reproducibility of whole-slide imaging (WSI) systems is introduced. A color phantom is used to evaluate the difference between the input to and the output from a WSI system. The method consists of four components: (a) producing the color phantom, (b) establishing the truth of the color phantom, (c) retrieving the digital display data from the WSI system, and (d) calculating the color difference. The method was applied to a WSI system and used to evaluate the color characteristics with and without color management.
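As a rough illustration of step (d), the sketch below computes a CIEDE2000 color difference between the phantom's established truth and the digital display data retrieved from a WSI system; the patch values are hypothetical, and the metric choice is an assumption since the paper does not specify its color-difference formula here.

```python
# Sketch of the color-difference calculation between phantom truth and
# the WSI system's rendered output, using CIEDE2000 from scikit-image.
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

# Hypothetical sRGB values (0-1) for two phantom patches:
truth_rgb = np.array([[[0.80, 0.35, 0.40], [0.30, 0.55, 0.75]]])
wsi_rgb   = np.array([[[0.78, 0.38, 0.42], [0.33, 0.52, 0.74]]])

delta_e = deltaE_ciede2000(rgb2lab(truth_rgb), rgb2lab(wsi_rgb))
print("per-patch deltaE2000:", delta_e.ravel(), "mean:", delta_e.mean())
```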
A significant recent breakthrough in medical imaging is the development of a new non-invasive modality based on multispectral and hyperspectral imaging that can be easily integrated in the operating room. This technology consists of collecting series of images at wavelength intervals of only a few nanometers, in which single pixels carry spectral information relevant to the scene under observation. Before becoming of practical interest for the clinician, such a system should meet important requirements. Firstly, it should enable true reflectance measurements and high quality images, providing valuable physical data after spatial and spectral calibration. Secondly, quick band-pass scanning and a smart interface are needed for intra-operative use. Finally, experimentation is required to develop expert knowledge for hyperspectral image interpretation and result display on RGB screens, to assist the surgeon with tissue detection and diagnostic capabilities during an intervention. This paper focuses mainly on the first two specifications of this methodology, applied to a liquid crystal tunable filter (LCTF) based visible and near-infrared spectral imaging system. The system consists of an illumination unit and a spectral imager that includes a monochrome camera, two LCTFs and a fixed focal lens. It also involves a computer with the data acquisition software. The system can capture hyperspectral images in the spectral range of 400–1100 nm. Results of preclinical experiments indicated that anatomical tissues can be distinguished, especially in near-infrared bands. This suggests that hyperspectral imaging has great potential to provide efficient assistance to surgeons.
This paper proposes an adaptive image representation learning method for cervix cancer tumor detection. The method learns the representation in two stages: a local feature description using sparse dictionary learning and a global image representation using a bag-of-features (BOF) approach. The resultant representation is thus a BOF histogram learned from a sparse local patch representation. The parameters of the sparse representation learning algorithm are tuned by searching for dictionaries with low coherence and high sparsity. The proposed method was evaluated on a dataset of 394 cervical histology images with tumoral and non-tumoral pathologies, acquired at 10X magnification and a resolution of 3800 × 3000 pixels in RGB color. A conventional BOF image representation, using a linearized raw-block patch descriptor, was selected as the baseline. The preliminary results show that our proposed method improves on the baseline for all BOF dictionary sizes (125, 250, 500, 1000 and 2000). Under a 10-fold cross-validation test and a BOF dictionary of size 2000, the best performance was 0.77±0.04 in average accuracy, improving on the baseline by 2.5%. These results suggest that a learning-from-data approach could be used in different stages of an image classifier construction pipeline, in particular for the image representation stage.
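A minimal sketch of the two-stage representation described above: learn a sparse patch dictionary, encode patches, and pool the codes into a per-image bag-of-features histogram. Patch size, dictionary size, sparsity level, and the file name are illustrative assumptions, not the paper's tuned configuration.

```python
# Sketch: sparse dictionary learning on patches + BOF histogram per image.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d
from skimage import io, color

gray = color.rgb2gray(io.imread("cervix_tile.png"))       # hypothetical image
patches = extract_patches_2d(gray, (8, 8), max_patches=2000, random_state=0)
patches = patches.reshape(len(patches), -1)
patches -= patches.mean(axis=1, keepdims=True)            # remove local mean

dico = MiniBatchDictionaryLearning(n_components=250, alpha=1.0,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=3,
                                   random_state=0).fit(patches)
codes = dico.transform(patches)                            # sparse codes

# Bag-of-features: assign each patch to its strongest atom and histogram.
assignments = np.abs(codes).argmax(axis=1)
bof_hist = np.bincount(assignments, minlength=250).astype(float)
bof_hist /= bof_hist.sum()
```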
The visibility and continuity of the inner segment outer segment (ISOS) junction layer of the photoreceptors on spectral domain optical coherence tomography images is known to be related to visual acuity in patients with age-related macular degeneration (AMD). Automatic detection and segmentation of lesions and pathologies in retinal images is crucial for the screening, diagnosis, and follow-up of patients with retinal diseases. One of the challenges of using the classical level-set algorithms for segmentation involves the placement of the initial contour. Manually defining the contour or randomly placing it in the image may lead to segmentation of erroneous structures. It is important to be able to automatically define the contour by using information provided by image features. We explored a level-set method which is based on the classical Chan-Vese model and which utilizes image feature information for automatic contour placement for the segmentation of pathologies in fluorescein angiograms and en face retinal images of the ISOS layer. This was accomplished by exploiting a priori knowledge of the shape and intensity distribution, allowing the use of projection profiles to detect the presence of pathologies that are characterized by intensity differences with surrounding areas in retinal images. We first tested our method by applying it to fluorescein angiograms. We then applied our method to en face retinal images of patients with AMD. The experimental results demonstrate that the proposed method provided a quick and improved outcome as compared to the classical Chan-Vese method in which the initial contour is randomly placed, thus indicating the potential to provide a more accurate and detailed view of changes in pathologies due to disease progression and treatment.
The majority of meningiomas belong to one of four subtypes: fibroblastic, meningothelial, transitional and psammomatous. Classification of histopathology images of these meningiomas is a time-consuming and error-prone task, and automatic methods aim to help reduce the time spent and errors made. This work is concerned with classifying histopathology images into the above subtypes by extracting simple morphology features to represent each image subtype. Morphology features are identified based on the pathology of the meningioma subtypes and are used to classify each image into one of the four WHO Grade I subtypes. The morphology features correspond to visual changes in the appearance of cells and the presence of psammoma bodies. Using morphological image processing these features can be extracted, and the presence of each detected feature is used to build a vector for each meningioma image. These feature vectors are then classified using a Random Forest based classifier. A set of 80 images was used for experimentation, with each subtype represented by 20 images, and a ten-fold cross validation approach was used to obtain an overall classification accuracy. Using the above methodology a maximum classification accuracy of 91.25% is achieved across the four subtypes with coherent misclassification (e.g. no misclassification between fibroblastic and meningothelial). This work demonstrates that morphology features can be used to perform meningioma subtype classification and provides an understandable link between the features identified in the images and the classification results obtained.
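A minimal sketch of the classification stage described above: per-image morphology feature vectors classified with a random forest under ten-fold cross-validation. The feature values and dimensionality below are placeholders, not the paper's extracted features.

```python
# Sketch: random forest over morphology feature vectors, 10-fold CV.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(80, 6)           # 80 images, 6 hypothetical morphology features
y = np.repeat(np.arange(4), 20)     # four subtypes, 20 images each

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)
print("mean accuracy over 10 folds:", scores.mean())
```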
A potential biomarker for early diagnosis of cancer is the assessment of high nuclear DNA content. Conventional hematoxylin staining is neither stoichiometric nor reproducible. Although Feulgen staining is stoichiometric, it is time consuming and destroys nuclear morphology. We used acidic thionin stain, which can be stoichiometric while also preserving the nuclear morphology used in conventional cytology. Fifty chicken erythrocyte nuclei singlets (CENs), diploid trout erythrocyte nuclei (TENs) and triploid TENs were stained for 15 and 30 minutes each. After imaging with an optical projection tomography microscope (OPTM), 3D reconstructions of the nuclei were processed to calculate chromatin content. The mean of the ratios of individual observations was compared with the standard ratios of DNA indices of the flow cytometry standards. Mean error, standard deviation and 97% confidence interval (CI) were computed for the ratios of these standards. At 15 and 30 minutes, the ratio of triploid TEN to TEN was 1.72 and 1.76, TEN to CEN was 1.27 and 2.01, and triploid TEN to CEN was 2.11 and 3.39, respectively. Estimates of DNA indices for all 3 types of nuclei had less mean error at 30 minutes of staining: triploid TEN to TEN 0.349±0.04, TEN to CEN 0.36±0.04 and triploid TEN to CEN 0.64±0.07. In conclusion, imaging of cells with thionin staining at 30 minutes and 3D reconstruction provides quantitative assessment of cell chromatin content. The addition of this quantitative feature of aneuploidy is expected to add greater accuracy to a classifier for early diagnosis of cancer based on 3D cytological imaging.
Diagnosis of a histopathology glass slide is a complex process that involves accurate recognition of several structures, their function in the tissue and their relation to other structures. The way in which the pathologist represents the image content and the relations between those objects yields better and more accurate diagnoses. Therefore, an appropriate semantic representation of the image content will be useful in several analysis tasks such as cancer classification, tissue retrieval and histopathological image analysis, among others. Nevertheless, automatically recognizing those structures and extracting their inner semantic meaning remain very challenging tasks. In this paper we introduce a new semantic representation that describes histopathological concepts suitable for classification. The approach identifies local concepts using a dictionary learning approach, i.e., the algorithm learns the most representative atoms from a set of randomly sampled patches, and then models the spatial relations among them by counting the co-occurrences between atoms while penalizing the spatial distance. The proposed approach was compared with a bag-of-features representation in a tissue classification task. For this purpose, 240 histological microscopical fields of view, 24 per tissue class, were collected. These images fed a Support Vector Machine classifier per class, using 120 images as the training set and the remaining ones for testing, maintaining the same proportion of each concept in the training and test sets. The classification results, averaged over 100 random partitions of the training and test sets, show that our approach is on average almost 6% more sensitive than the bag-of-features representation.
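A minimal sketch of the representation just described, under assumptions and not the authors' code: a small dictionary of patch atoms is learned, each sampled patch is hard-assigned to its nearest atom, and a distance-penalized co-occurrence matrix between atoms serves as the image descriptor (the patch size, dictionary size and distance scale below are illustrative choices).

```python
# Sketch: learn patch "atoms" with dictionary learning, assign each sampled patch
# to an atom, and accumulate a distance-penalized atom co-occurrence descriptor.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
patches = rng.random((500, 8 * 8))        # hypothetical 8x8 gray patches, flattened
coords = rng.random((500, 2)) * 512       # (x, y) centre of each patch in the field of view

n_atoms, sigma = 16, 64.0                 # assumed dictionary size and distance scale
dico = MiniBatchDictionaryLearning(n_components=n_atoms, random_state=0).fit(patches)

# Hard-assign each patch to the atom with the largest absolute correlation.
labels = np.argmax(np.abs(patches @ dico.components_.T), axis=1)

cooc = np.zeros((n_atoms, n_atoms))
for i in range(len(patches)):
    d = np.linalg.norm(coords - coords[i], axis=1)
    w = np.exp(-d / sigma)                # penalize spatially distant pairs
    for j in np.where(d < 3 * sigma)[0]:
        if j != i:
            cooc[labels[i], labels[j]] += w[j]

descriptor = cooc.ravel() / cooc.sum()    # normalized descriptor, later fed to an SVM
print(descriptor.shape)
```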
Understanding the spatial distribution of prostate cancer and how it changes according to prostate specific antigen (PSA) values, Gleason score, and other clinical parameters may help in comprehending the disease and increase the overall success rate of biopsies. This work aims to build 3D spatial distributions of prostate cancer and examine the extent and location of cancer as a function of independent clinical parameters. The border of the gland and the cancerous regions from whole-mount histopathological images are used to reconstruct 3D models showing the localization of the tumor. This process utilizes color segmentation and interpolation based on mathematical morphological distance. 58 glands are deformed into one prostate atlas using a combination of rigid, affine, and B-spline deformable registration techniques. The spatial distribution is built by counting the number of occurrences at a given position in 3D space across the registered prostate cancers. Finally, a difference-between-proportions test is used to compare different spatial distributions. Results show that prostate cancer has a significant difference (SD) in the right zone of the prostate between populations with PSA greater than and less than 5 ng/ml. Age does not have any impact on the spatial distribution of the disease. Positive and negative capsule-penetrated cases show a SD in the right posterior zone. There is a SD over almost all of the gland between cases with tumors larger and smaller than 10% of the whole prostate. A larger database is needed to improve the statistical validity of the test. Finally, information from whole-mount histopathological images may provide better insight into prostate cancer.
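As an illustrative sketch of the statistical comparison step (hypothetical masks and group sizes, not the study data), the per-voxel occurrence counts in atlas space can be compared between two populations with a two-proportion z-test:

```python
# Sketch: after registration to the atlas, each case is a binary 3-D cancer mask.
# Occurrence counts per voxel give the spatial distribution; a two-proportion
# z-test compares two populations voxel-by-voxel (illustrative only).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
shape = (32, 32, 32)
group_a = rng.random((30, *shape)) > 0.80   # hypothetical masks, e.g. PSA > 5 ng/ml
group_b = rng.random((28, *shape)) > 0.85   # hypothetical masks, e.g. PSA <= 5 ng/ml

p_a, p_b = group_a.mean(axis=0), group_b.mean(axis=0)   # per-voxel proportions
n_a, n_b = len(group_a), len(group_b)
p_pool = (group_a.sum(axis=0) + group_b.sum(axis=0)) / (n_a + n_b)

se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = np.where(se > 0, (p_a - p_b) / np.where(se > 0, se, 1), 0.0)
p_values = 2 * norm.sf(np.abs(z))           # two-sided p-value per voxel

significant = p_values < 0.05
print("voxels with a significant difference:", int(significant.sum()))
```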
The detection of suspicious cancerous regions is still a problematic task in histopathology, where complex, qualitative, and highly subjective analyses are required from experts. Digital pathology offers a way to build semi-automated tools that could assist pathologists in carrying out their analysis in a quantitative way. Methods for assisted detection of cancerous areas are mostly based on low-level textural features of the tissue, whose semantic level is far from the visual appearance that histopathologists consider during their analysis. In order to bridge the semantic gap between histopathology and machine representation, we propose an algorithm for the detection of cancerous regions in lung and bladder adenocarcinoma samples, based on a supervised multi-level representation directly linked to histopathological characteristics. In addition, an unsupervised clustering method performs a segmentation of the histopathology structures according to their visual appearance, through a similarity metric based on histograms of samples in the Lab perceptual color space. This makes it possible to increase the sensitivity of the supervised approach by extending the regions (i.e., hits) it detects. We validated the accuracy of the proposed segmentation approach with a group of ten users on 40 histopathology cases, obtaining a good response. The experiments, performed using the ground truth provided by a board of certified experts on different samples of adenocarcinoma (graded G1), demonstrate the effectiveness of our approach both in terms of sensitivity and precision in detecting suspicious regions. Our algorithm is currently under testing on more samples and different cancerous histotypes.
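A minimal sketch of the appearance comparison underlying the unsupervised step, assuming a placeholder image and histogram intersection as the similarity measure (the paper's exact metric and region definition may differ):

```python
# Sketch: compare the appearance of two image regions via histograms in the
# Lab color space (histogram intersection as a stand-in similarity metric).
import numpy as np
from skimage import data, color

rgb = data.astronaut() / 255.0            # placeholder RGB image, not histopathology
lab = color.rgb2lab(rgb)

def lab_histogram(region, bins=8):
    """Joint L*a*b* histogram of a region, normalized to sum to 1."""
    h, _ = np.histogramdd(region.reshape(-1, 3), bins=bins,
                          range=[(0, 100), (-128, 128), (-128, 128)])
    return h / h.sum()

h1 = lab_histogram(lab[:256, :256])
h2 = lab_histogram(lab[256:, 256:])
similarity = np.minimum(h1, h2).sum()     # histogram intersection in [0, 1]
print(f"Lab histogram similarity: {similarity:.3f}")
```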
The processing of multi-gigapixel virtual histology slides is a computationally intensive and time-consuming task. Common tiled TIFF slide formats, such as those used by Aperio [1], contain inherent header information that can be used to rapidly locate tissue regions for cervical intraepithelial neoplasia (CIN) diagnosis. Tiles in these formats are individually compressed subsections of the virtual slide, whose compression ratio varies with their individual content. This paper discusses a method that exploits this information in an iterative process to rapidly identify regions of interest containing epithelial tissue. These regions are decompressed on a multi-core CPU, from which a Compute Unified Device Architecture (CUDA) enabled GPU rapidly generates features and Support Vector Machine (SVM) decisions. The SVM classifier results are used in a post-processing scheme to remove apparently spurious misclassifications. The mean overall execution time on a high-end desktop PC with a GTX 560 GPU is roughly 3 seconds per gigapixel, while maintaining the area under the ROC curve above 0.9 when classifying squamous epithelium versus other tissues.
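For orientation, a sketch of how the tile-level header information can be exploited with the tifffile library (the slide path and the percentile cut-off are placeholders; the approach assumes a tiled, compressed TIFF such as an Aperio slide):

```python
# Sketch: use the per-tile compressed byte counts stored in a tiled TIFF header
# to flag likely tissue tiles before any pixel data is decompressed.
import numpy as np
import tifffile

with tifffile.TiffFile("slide.svs") as tif:              # hypothetical slide path
    page = tif.pages[0]
    byte_counts = np.asarray(page.tags["TileByteCounts"].value)
    tiles_across = -(-page.imagewidth // page.tilewidth)   # ceiling division
    tiles_down = -(-page.imagelength // page.tilelength)

grid = byte_counts[: tiles_across * tiles_down].reshape(tiles_down, tiles_across)
threshold = np.percentile(grid, 75)        # assumed cut-off; tune per scanner/compression
candidate_tiles = np.argwhere(grid > threshold)           # (row, col) of likely tissue tiles
print(f"{len(candidate_tiles)} of {grid.size} tiles selected for decompression")
```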
Diagnosis of hepatocellular carcinoma (HCC) on the basis of digital images is a challenging problem because, unlike gastrointestinal carcinoma, strong structural and morphological features are limited and sometimes absent from HCC images. In this study, we describe the classification of HCC images using statistical distributions of features obtained from image analysis of cell nuclei and hepatic trabeculae. Images of 130 hematoxylin-eosin (HE) stained histologic slides were captured at 20X by a slide scanner (NanoZoomer, Hamamatsu Photonics, Japan), and 1112 region-of-interest (ROI) images were extracted for classification (551 negatives and 561 positives, including 113 well-differentiated positives). For a single nucleus, the following features were computed: area, perimeter, circularity, ellipticity, long and short axes of the elliptic fit, contour complexity and gray level co-occurrence matrix (GLCM) texture features (angular second moment, contrast, homogeneity and entropy). In addition, distributions of nuclear density and hepatic trabecula thickness within an ROI were also extracted. To represent an ROI, statistical distributions (mean, standard deviation and percentiles) of these features were used. In total, 78 features were extracted for each ROI, and a support vector machine (SVM) was trained to classify negative and positive ROIs. Experimental results using 5-fold cross validation show 90% sensitivity at 87.8% specificity. The use of statistical distributions over a relatively large area makes the HCC classifier robust to occasional failures in the extraction of nuclear or hepatic trabecula features, thus providing stability to the system.
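A short sketch of the GLCM texture features listed above, using scikit-image; entropy is not provided by graycoprops, so it is computed directly from the normalized matrix (the patch here is a random placeholder):

```python
# Sketch: per-nucleus GLCM texture features (angular second moment, contrast,
# homogeneity, entropy) computed with scikit-image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(nucleus_patch):
    """nucleus_patch: 2-D uint8 gray-level crop around a single nucleus."""
    glcm = graycomatrix(nucleus_patch, distances=[1],
                        angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    asm = graycoprops(glcm, "ASM").mean()
    contrast = graycoprops(glcm, "contrast").mean()
    homogeneity = graycoprops(glcm, "homogeneity").mean()
    p = glcm.mean(axis=(2, 3))                       # average over distances/angles
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # GLCM entropy (not in graycoprops)
    return np.array([asm, contrast, homogeneity, entropy])

patch = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
print(glcm_features(patch))
```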
According to the Nottingham Grading System for breast cancer grading, nuclear pleomorphism is one of the three criteria, along with tubule formation and mitotic count, taken into account in the grading procedure. Nuclear pleomorphism is largely based on information about the variation of nuclei appearance, size, and shape. Nuclei extraction from breast cancer images is thus necessary for cancer grading, and has become one of the major problems in the domain of automatic image analysis. Recently, several papers have shown that stochastic Marked Point Processes are a promising tool for dealing with this kind of problem. In this paper, we present visual and quantitative comparisons of results obtained with two Marked Point Process based models using two types of objects, and analyse the advantages of each of them. We first show a way to detect nuclei position and size using ellipse-shaped objects. Ellipses give a fast, good approximation of nuclei shape and size. We then use arbitrarily-shaped objects to delineate nuclei contours more precisely. As this is a data-driven method, we discuss the best data energy to use for each kind of object, based on common criteria of the nuclei in any cancer grade. Results are obtained using Haematoxylin and Eosin (H and E) stained breast cancer slide images. As appearance, size and shape may vary considerably depending on the cancer grade, we present results for different grades and compare our methods for each of them. The quantitative quality of the obtained results is shown via comparison with ground-truth segmentations given by pathologists.
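As a heavily simplified stand-in for the data energy discussion (not the specific energy used in the paper), the sketch below scores an ellipse-shaped candidate by the contrast between its interior and a thin surrounding ring, exploiting the fact that hematoxylin-stained nuclei are darker than their surroundings:

```python
# Sketch of one possible data energy for an ellipse-shaped object: the candidate
# is favoured (negative energy) when its interior is darker than a thin ring
# around it. Simplified stand-in only, not the paper's energy term.
import numpy as np
from skimage.draw import ellipse

def ellipse_data_energy(gray, cy, cx, ry, rx, angle=0.0, ring=3):
    inner = np.zeros(gray.shape, bool)
    outer = np.zeros(gray.shape, bool)
    rr, cc = ellipse(cy, cx, ry, rx, shape=gray.shape, rotation=angle)
    inner[rr, cc] = True
    rr, cc = ellipse(cy, cx, ry + ring, rx + ring, shape=gray.shape, rotation=angle)
    outer[rr, cc] = True
    ring_mask = outer & ~inner
    if not inner.any() or not ring_mask.any():
        return 0.0
    # Negative when the interior is darker than the ring (favourable candidate).
    return float(gray[inner].mean() - gray[ring_mask].mean())

img = np.random.default_rng(0).random((128, 128))   # placeholder gray image
print(ellipse_data_energy(img, 64, 64, 10, 6, angle=0.3))
```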
Medical image fusion plays an important role in clinical applications such as image-guided surgery, image-guided radiotherapy, non-invasive diagnosis, and treatment planning. Although considerable research has gone into developing various medical image fusion algorithms, the disadvantage of these approaches is that they lack universality in dealing with different kinds of medical images. To address this problem, we propose, for the first time, a method of medical image fusion using the spiking cortical model (SCM). In the paper, the mathematical model of the SCM is first described, and the SCM-based image fusion algorithm is then introduced in detail. To show that the SCM-based fusion method can deal with multimodal medical images, we used three pairs of medical images with different modalities in the simulation experiments and made comparisons between the proposed method and state-of-the-art fusion methods such as the Laplacian pyramid, contrast pyramid, morphological pyramid and ratio pyramid. The performance of the various methods is assessed using image quality metrics such as Mutual Information (MI), the edge preservation value (QAB/F), the Local Structural Similarity (LSSIM) and the Universal Image Quality Index (UIQI). The experimental results show that our proposed method outperforms the other methods in both visual effect and objective evaluation, demonstrating that the SCM-based method is a highly effective approach to multi-modal medical image fusion owing to its versatility and stability.
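For reference, a minimal implementation of the Mutual Information (MI) fusion metric mentioned above, computed from joint histograms between the fused image and each source image (the images and the averaging "fusion" are placeholders):

```python
# Sketch: Mutual Information between a fused image and its source images,
# estimated from joint gray-level histograms.
import numpy as np

def mutual_information(a, b, bins=64):
    """MI between two equally sized gray images, in bits."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
src_a, src_b = rng.random((128, 128)), rng.random((128, 128))   # placeholder sources
fused = 0.5 * (src_a + src_b)                                   # placeholder "fusion"
mi_total = mutual_information(fused, src_a) + mutual_information(fused, src_b)
print(f"MI(F,A) + MI(F,B) = {mi_total:.3f} bits")
```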
The analysis of hepatic tissue structure is required for quantitative assessment of liver histology. In particular, a cord-like structure of liver cells, called a trabecula, carries important information for the diagnosis of hepatocellular carcinoma (HCC). However, the extraction of trabeculae is considered difficult because liver cells take on various colors and appearances depending on tissue conditions. In this paper, we propose an approach that extracts trabeculae from images of hematoxylin and eosin stained liver tissue slides by extracting everything other than the trabeculae: the sinusoids and the stromal area. The sinusoids are extracted simply on the basis of color information, with the image corrected by orientation-selective filtering before segmentation. The stromal area mainly consists of fiber and often densely includes lymphocytes. Therefore, in the proposed method, the fiber region and the lymphocytes are extracted separately, and the stromal region is then determined from the extracted results. The determination of stroma is performed on superpixels in order to obtain precise boundaries. Once the regions of sinusoids and stroma are obtained, the trabeculae can be segmented as the remaining region. The proposed method was applied to 10 test images of normal and HCC liver tissues, and the results were evaluated against manual segmentations. As a result, we confirmed that both the sensitivity and the specificity of trabecula extraction reach around 90%.
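As a rough sketch of the superpixel-based labelling idea, assuming SLIC superpixels and a placeholder per-superpixel color rule rather than the paper's fiber and lymphocyte extraction:

```python
# Sketch: superpixel-based labelling with SLIC and a placeholder per-superpixel
# decision rule based on mean Lab color.
import numpy as np
from skimage import data, color, segmentation

rgb = data.astronaut()                     # placeholder image standing in for an H&E tile
segments = segmentation.slic(rgb, n_segments=400, compactness=10, start_label=0)

lab = color.rgb2lab(rgb)
stroma_like = np.zeros(segments.max() + 1, dtype=bool)
for s in range(segments.max() + 1):
    mask = segments == s
    mean_lab = lab[mask].mean(axis=0)
    # Hypothetical rule: bright, weakly saturated superpixels are flagged here;
    # the actual method combines fiber and lymphocyte extraction results instead.
    stroma_like[s] = mean_lab[0] > 60 and abs(mean_lab[1]) + abs(mean_lab[2]) < 30

stroma_mask = stroma_like[segments]        # pixel mask with superpixel-precise boundaries
print(f"{stroma_mask.mean():.1%} of pixels labelled stroma-like")
```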