An estimated 6.5 million patients in the United States are affected by chronic wounds, with more than 25 billion US dollars and countless hours spent annually on all aspects of chronic wound care. There is a need for software tools that analyze wound images, characterize wound tissue composition, measure wound size, and monitor changes over time. Performed manually, this process is time-consuming and subject to intra- and inter-reader variability. In this paper, we propose a method that characterizes chronic wounds containing granulation, slough, and eschar tissue. First, we generate a Red-Yellow-Black-White (RYKW) probability map, which then guides a region growing segmentation process. The red, yellow, and black probability maps are designed to handle granulation, slough, and eschar tissue, respectively, while the white probability map is designed to detect the white label card used for measurement calibration. The innovative aspects of this work include: 1) definition of a probability map specific to wound characteristics for segmentation; 2) computationally efficient region growing on a 4D map; 3) auto-calibration of measurements from the content of the image. The method was applied to 30 wound images provided by the Ohio State University Wexner Medical Center, with the ground truth generated independently by the consensus of two clinicians. While the inter-reader agreement is 85.5%, the method achieves an accuracy of 80%.
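The sketch below is illustrative only and is not the authors' implementation: the exact color models, the 4D region growing criterion, and the calibration step are not reproduced. It assumes simple Gaussian color models with hypothetical per-class reference colors (the CLASS_MEANS and SIGMA values are placeholders) to build a four-channel probability map, and a BFS-based region growing that accepts 4-connected neighbors whose dominant class matches the seed's.

```python
import numpy as np
from collections import deque

# Hypothetical per-class mean RGB colors and a shared variance; real models would be learned.
CLASS_MEANS = {
    "red":    np.array([180.0,  60.0,  60.0]),   # granulation
    "yellow": np.array([200.0, 190.0,  90.0]),   # slough
    "black":  np.array([ 40.0,  35.0,  30.0]),   # eschar
    "white":  np.array([240.0, 240.0, 240.0]),   # calibration card
}
SIGMA = 40.0

def rykw_probability_map(image):
    """Return an (H, W, 4) map of per-pixel class probabilities (normalized Gaussian scores)."""
    img = image.astype(np.float64)
    scores = []
    for mean in CLASS_MEANS.values():
        d2 = np.sum((img - mean) ** 2, axis=-1)
        scores.append(np.exp(-d2 / (2.0 * SIGMA ** 2)))
    prob = np.stack(scores, axis=-1)
    prob /= prob.sum(axis=-1, keepdims=True) + 1e-12
    return prob

def region_grow(prob, seed):
    """Grow a region from `seed` (row, col), adding 4-connected neighbors whose
    most probable class equals the seed's class."""
    labels = np.argmax(prob, axis=-1)
    target = labels[seed]
    mask = np.zeros(labels.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    h, w = labels.shape
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] and labels[nr, nc] == target:
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

if __name__ == "__main__":
    demo = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    prob = rykw_probability_map(demo)
    wound_mask = region_grow(prob, seed=(32, 32))
    print(prob.shape, wound_mask.sum())
```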
Brain cancer surgery requires intraoperative consultation by neuropathology to guide surgical decisions regarding the extent of gross total resection of the tumor. In this context, the differential diagnosis between glioblastoma and metastatic cancer is challenging, as the decision must be made during surgery within a short time frame (typically 30 minutes). We propose a method to classify glioblastoma versus metastatic cancer based on textural features extracted from the non-nuclei regions of cytologic preparations. In glioblastoma, these regions of interest are filled with glial processes between the nuclei, which appear as anisotropic thin linear structures; in metastasis, these regions have a more homogeneous appearance. Texture features extracted from these regions can therefore distinguish between the two tissue types. We use Discrete Wavelet Frames to characterize the underlying texture because of their multi-resolution modeling capability. The textural characterization is carried out primarily in the non-nuclei regions, after the nuclei regions are segmented by adapting our visually meaningful decomposition segmentation algorithm to this problem. A k-nearest neighbor classifier is then used to assign the features to the glioblastoma or metastatic cancer class. Experiments on 53 images (29 glioblastomas and 24 metastases) resulted in average accuracies of 89.7% for glioblastoma, 87.5% for metastasis, and 88.7% overall. Further studies are underway to incorporate nuclei-region features into the classification on an expanded dataset, as well as to extend the classification to more types of cancer.
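A minimal sketch of the texture-plus-classifier pipeline is given below. It is not the authors' implementation: it uses PyWavelets' stationary (undecimated) wavelet transform as a stand-in for Discrete Wavelet Frames, subband energies as texture features, and a scikit-learn k-nearest neighbor classifier on hypothetical toy patches (the stripe-like versus homogeneous patches and their labels are illustrative only).

```python
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def dwf_texture_features(patch, wavelet="haar", level=2):
    """Energy of each detail subband at each level of an undecimated 2-D decomposition.
    `patch` sides must be divisible by 2**level for swt2."""
    coeffs = pywt.swt2(patch.astype(np.float64), wavelet=wavelet, level=level)
    feats = []
    for _, (ch, cv, cd) in coeffs:
        for band in (ch, cv, cd):
            feats.append(np.mean(band ** 2))  # subband energy
    return np.array(feats)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: anisotropic (stripe-like) vs. homogeneous noise patches.
    X, y = [], []
    for _ in range(20):
        stripes = np.sin(np.arange(64)[None, :] / 2.0) * 50 + rng.normal(0, 5, (64, 64))
        smooth = rng.normal(0, 5, (64, 64))
        X += [dwf_texture_features(stripes), dwf_texture_features(smooth)]
        y += [0, 1]  # 0 = "glioblastoma-like", 1 = "metastasis-like" (toy labels)
    clf = KNeighborsClassifier(n_neighbors=3).fit(X[:-4], y[:-4])
    print("held-out predictions:", clf.predict(X[-4:]), "true labels:", y[-4:])
```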
We investigate the use of a new binary coherent vector approach, integrated into a proposed content-based medical image retrieval (CBMIR) system, to retrieve computed tomography (CT) brain images. One hundred fifty plain axial CT brain images covering five types of hemorrhage are used as queries against a database of 2500 normal and abnormal CT brain images. Possible combinations of shape features are represented as feature vectors and evaluated using precision-recall plots. Solidity, form factor, equivalent circular diameter (ECD), and Hu moments are proposed as identifying features of intracranial hemorrhages in CT brain images. In addition to identifying hemorrhages, the proposed approach significantly improves the performance of the CBMIR system. Such a retrieval system can be widely useful given the rapid developments in computer vision and database management that motivated this application of CBMIR.
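As a hedged illustration of the shape descriptors named above (solidity, form factor, equivalent circular diameter, and Hu moments), the sketch below computes them with scikit-image for a segmented binary mask; the paper's exact feature construction, combination, and retrieval similarity measure are not reproduced, and the disc-shaped toy mask is a placeholder for a segmented hemorrhage. In a retrieval setting, such vectors would typically be compared with a distance measure and the ranked results scored by precision and recall.

```python
import numpy as np
from skimage.measure import label, regionprops

def hemorrhage_shape_features(binary_mask):
    """Concatenate shape descriptors of the largest connected region in a binary mask."""
    regions = regionprops(label(binary_mask.astype(np.uint8)))
    if not regions:
        return None
    r = max(regions, key=lambda p: p.area)
    # Form factor: 4*pi*area / perimeter^2 (1.0 for a perfect circle).
    form_factor = 4.0 * np.pi * r.area / (r.perimeter ** 2 + 1e-12)
    ecd = r.equivalent_diameter  # diameter of a circle with the same area
    return np.concatenate(([r.solidity, form_factor, ecd], r.moments_hu))

if __name__ == "__main__":
    # Toy mask: a filled disc standing in for a segmented hemorrhage.
    yy, xx = np.mgrid[:128, :128]
    mask = (yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2
    print(hemorrhage_shape_features(mask))
```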
This paper presents research on a robust technique for texture-based image retrieval in multimedia museum collections. The aim is to use a query image patch containing a single texture to retrieve images containing a region with texture similar to that of the query. A retrieval technique that does not require segmentation is presented. The algorithm uses a multiscale sub-image matching method together with an appropriate texture feature extractor. The multiscale sub-image matching is achieved by first decomposing each database image into a set of 64×64-pixel patches covering the entire image. The database image is then rescaled to create sub-images corresponding to a larger scale, and the process continues until the resolution of the image reaches a pre-determined value. The result is a collection of sub-images corresponding to different image regions and scales, and the final image feature vector consists of the feature vectors of all sub-images. Several wavelet-based feature extractors are tested with the multiscale technique. The experiments show that multiscale sub-image matching is an efficient way to achieve effective texture retrieval without any need for segmentation.
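A minimal sketch of the multiscale decomposition described above is given below, assuming 64×64 patches, downscaling by a factor of 2 at each step, and a minimum image size of 64 pixels (the function name, parameters, and the choice of a halving factor are assumptions for illustration; the texture feature extractor applied to each patch is left out).

```python
import numpy as np
from skimage.transform import rescale

def multiscale_patches(image, patch=64, min_size=64):
    """Yield (scale_index, row, col, patch) tuples over successively downscaled versions
    of `image`, tiling each version into non-overlapping patch x patch blocks."""
    img = image.astype(np.float64)
    scale = 0
    while min(img.shape[:2]) >= min_size:
        h, w = img.shape[:2]
        for r in range(0, h - patch + 1, patch):
            for c in range(0, w - patch + 1, patch):
                yield scale, r, c, img[r:r + patch, c:c + patch]
        img = rescale(img, 0.5, anti_aliasing=True)  # halve the resolution for the next scale
        scale += 1

if __name__ == "__main__":
    demo = np.random.rand(256, 256)
    counts = {}
    for s, _, _, _ in multiscale_patches(demo):
        counts[s] = counts.get(s, 0) + 1
    print(counts)  # e.g. {0: 16, 1: 4, 2: 1}
```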