Alzheimer’s Disease (AD) is a devastating neurodegenerative disorder. Recent advances in tau-positron emission tomography (PET) imaging make it possible to quantify and map the regional distribution of an important hallmark of AD across the brain. There is a need to develop machine learning (ML) algorithms to interrogate the utility of this new imaging modality. While some recent studies have shown the promise of ML for differentiating AD patients from normal controls (NC) based on tau-PET images, little work has investigated whether tau-PET, with the help of ML, can predict the risk of conversion to AD while an individual is still at the early Mild Cognitive Impairment (MCI) stage. We developed an early AD risk predictor for subjects with MCI based on tau-PET and ML. Our ML algorithms achieved good accuracy in predicting the risk of conversion to AD for a given MCI subject, and the important features contributing to the prediction are consistent with literature reports of tau-susceptible regions. This work demonstrates the feasibility of developing an early AD risk predictor for subjects with MCI based on tau-PET and ML.
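As a hedged illustration of this kind of pipeline (the abstract does not specify the exact model; the feature matrix, classifier choice, and data below are hypothetical placeholders), a conversion-risk predictor from regional tau-PET features might look like:

```python
# Minimal sketch, NOT the authors' pipeline: predict MCI-to-AD conversion
# from regional tau-PET SUVR features; data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))      # 120 MCI subjects x 20 regional SUVRs (assumed)
y = rng.integers(0, 2, size=120)    # 1 = converted to AD, 0 = stable MCI

clf = RandomForestClassifier(n_estimators=500, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc").mean()

clf.fit(X, y)                                         # rank features so that
ranking = np.argsort(clf.feature_importances_)[::-1]  # tau-susceptible regions surface
print(f"CV AUC: {auc:.2f}; top regions (by index): {ranking[:5]}")
```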
Multiple imaging modalities are often available for the diagnosis/prognosis of a disease, such as Alzheimer’s Disease (AD), but with different levels of accessibility and accuracy. MRI is used in the standard of care and is thus highly accessible to patients. In contrast, imaging of the pathologic hallmarks of AD, such as amyloid-PET and tau-PET, has low accessibility due to cost and other practical constraints, even though it is expected to provide higher diagnostic/prognostic accuracy than standard clinical MRI. We proposed Cross-Modality Transfer Learning (CMTL) for accurate diagnosis/prognosis based on a standard imaging modality with high accessibility (mod_HA), with a novel training strategy that uses not only the mod_HA data but also knowledge transferred from a model based on an advanced imaging modality with low accessibility (mod_LA). We applied CMTL to predict conversion of individuals with Mild Cognitive Impairment (MCI) to AD using the Alzheimer’s Disease Neuroimaging Initiative (ADNI) datasets, demonstrating improved performance of the MRI (mod_HA)-based model when it leverages knowledge transferred from the tau-PET (mod_LA)-based model.
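One common way to realize this kind of transfer is teacher-student soft-label regularization; the sketch below assumes that form (it is not the paper’s exact CMTL formulation, and all features, labels, and the mixing weight alpha are illustrative):

```python
# Minimal sketch of cross-modality transfer: a tau-PET "teacher" (mod_LA) is
# trained first, and its soft predictions on paired training subjects
# regularize the MRI "student" (mod_HA). Assumed form, synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 200
X_tau = rng.normal(size=(n, 15))    # mod_LA features, available at training only
X_mri = rng.normal(size=(n, 30))    # mod_HA features, available at test time
y = rng.integers(0, 2, size=n)      # MCI-to-AD conversion label

teacher = LogisticRegression(max_iter=1000).fit(X_tau, y)
soft = teacher.predict_proba(X_tau)[:, 1]            # transferred knowledge

# Student: logistic regression fit by gradient descent on a blend of hard
# labels and teacher soft labels (alpha is a hypothetical mixing weight).
alpha, lr, w = 0.5, 0.1, np.zeros(X_mri.shape[1])
target = alpha * y + (1 - alpha) * soft
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X_mri @ w))
    w -= lr * X_mri.T @ (p - target) / n             # cross-entropy gradient

p_student = 1.0 / (1.0 + np.exp(-X_mri @ w))         # MRI-only at deployment
print("student AUC on training subjects:", round(roc_auc_score(y, p_student), 3))
```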
An emerging trend in AD research is brain network analysis, including graph metrics and graph mining techniques. To construct a brain structural network, Diffusion Tensor Imaging (DTI) in conjunction with T1-weighted Magnetic Resonance Imaging (MRI) can be used to define brain regions as nodes, white matter tracts as edges, and tract density as edge weights. To study such a network, a sub-network is often obtained by excluding unrelated nodes or edges. Existing research has relied heavily on domain knowledge or single-threshold, individual-subject network metrics to identify the sub-network. In this research, we develop a bi-threshold frequent subgraph mining method (BT-FSG) to automatically filter out less important edges in response to the clinical question. Using this method, we are able to discover a subgraph of the human brain network that significantly reveals the difference between cognitively unimpaired APOE-4 carriers and noncarriers, based on the correlations between age and local network metrics and between age and global network metrics. This can potentially become a brain network marker for evaluating AD risk in preclinical individuals.
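The bi-threshold idea can be sketched as follows (an assumed simplification of BT-FSG, with synthetic weighted networks and illustrative threshold values): an edge survives only if its weight passes a within-subject threshold and it does so in enough subjects at the group level.

```python
# Minimal sketch of bi-threshold edge filtering (assumed simplification of
# BT-FSG): keep an edge if weight > t_edge within a subject AND this holds
# for at least a fraction t_freq of subjects.
import numpy as np

rng = np.random.default_rng(2)
n_subj, n_nodes = 40, 10
W = rng.random((n_subj, n_nodes, n_nodes))    # tract-density weighted networks
W = (W + W.transpose(0, 2, 1)) / 2            # symmetrize each subject's matrix

t_edge, t_freq = 0.6, 0.75                    # illustrative thresholds
present = W > t_edge                          # within-subject threshold
freq = present.mean(axis=0)                   # per-edge fraction of subjects
mask = freq >= t_freq                         # group-level threshold

print("edges kept in the common subgraph:", int(np.triu(mask, 1).sum()))
```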
By taking advantage of both mammography and breast MRI, contrast-enhanced digital mammography (CEDM) has emerged as a promising new imaging modality to improve the efficacy of breast cancer screening and diagnosis. The primary objective of this study is to develop and evaluate a new computer-aided detection and diagnosis (CAD) scheme for CEDM images to classify between malignant and benign breast masses. A CEDM dataset of 111 patients (33 benign and 78 malignant) was retrospectively assembled. Each case includes two types of images, namely low-energy (LE) and dual-energy subtracted (DES) images. First, the CAD scheme applied a hybrid segmentation method to automatically segment masses depicted on LE and DES images separately. Optimal segmentation results from DES images were also mapped to LE images and vice versa. Next, a set of 109 quantitative image features related to mass shape and density heterogeneity was computed. Last, four multilayer perceptron-based machine learning classifiers, integrated with a correlation-based feature subset evaluator and a leave-one-case-out cross-validation method, were built to classify mass regions depicted on LE and DES images, respectively. When the CAD scheme was applied to the original segmentations of DES and LE images, the areas under the ROC curves were 0.7585±0.0526 and 0.7534±0.0470, respectively. After optimal segmentation mapping from DES to LE images, the AUC of the CAD scheme significantly increased to 0.8477±0.0376 (p<0.01). Because DES images eliminate the overlapping effect of dense breast tissue on lesions, segmentation accuracy improved significantly compared with regular mammograms, and the study demonstrated that computer-aided classification of breast masses using CEDM images yields higher performance.
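The leave-one-case-out evaluation loop can be illustrated as below (a sketch only: the 109 features here are random placeholders, and the MLP architecture is an assumption rather than the paper’s configuration):

```python
# Minimal sketch of leave-one-case-out cross-validation of an MLP classifier
# on mass features; feature values are synthetic, case counts match the text.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(111, 109))       # 111 cases x 109 shape/heterogeneity features
y = np.r_[np.zeros(33), np.ones(78)]  # 33 benign, 78 malignant

scores = np.empty(len(y))
for train, test in LeaveOneOut().split(X):
    mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
    mlp.fit(X[train], y[train])                       # retrain per held-out case
    scores[test] = mlp.predict_proba(X[test])[:, 1]   # malignancy probability

print(f"LOOCV AUC: {roc_auc_score(y, scores):.3f}")
```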
Alzheimer’s Disease (AD) is the most common cause of dementia and currently has no cure. Treatments targeting the early stages of AD, such as Mild Cognitive Impairment (MCI), may be most effective in slowing AD and are thus attracting increasing attention. However, MCI is substantially heterogeneous in that it can be caused by various underlying conditions, not only AD. To detect MCI due to AD, the NIA-AA published updated consensus criteria in 2011, in which the use of multi-modality images was highlighted as one of the most promising methods. It is therefore of great interest to develop a CAD system based on automatic, quantitative analysis of multi-modality images and machine learning algorithms to help physicians more adequately diagnose MCI due to AD. The challenge, however, is that multi-modality images are not universally available for many patients due to cost, access, safety, and lack of consent. We developed a novel Missing Modality Transfer Learning (MMTL) algorithm capable of utilizing whatever imaging modalities are available for an MCI patient to estimate the patient’s likelihood of MCI due to AD. Furthermore, we integrated MMTL with radiomics steps, including image processing, feature extraction, and feature screening, and with a post-processing step for uncertainty quantification (UQ), to develop a CAD system called “ADMultiImg” that assists the clinical diagnosis of MCI due to AD using multi-modality images together with patient demographic and genetic information. Tested on ADNI data, our system can generate a diagnosis with high accuracy even for patients with only partially available imaging modalities (AUC=0.94), and therefore may have broad clinical utility.
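The missing-modality setting can be sketched in a deliberately simplified form (this is not MMTL itself; masking, imputation, and an availability indicator stand in for the actual transfer-learning machinery, and all data are synthetic):

```python
# Minimal sketch of classification with partially available modalities:
# missing modality blocks are masked as NaN, imputed, and an availability
# indicator is passed to the classifier. Assumed simplification of MMTL.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
n = 150
blocks = {"MRI": 30, "amyloid_PET": 20, "tau_PET": 20}  # features per modality
X = rng.normal(size=(n, sum(blocks.values())))
y = rng.integers(0, 2, size=n)

# Simulate unavailable modalities: drop tau-PET for a random half of patients.
start = blocks["MRI"] + blocks["amyloid_PET"]
missing = rng.random(n) < 0.5
X[missing, start:] = np.nan

avail = (~missing).astype(float)[:, None]               # availability indicator
X_aug = np.hstack([X, avail])
model = make_pipeline(SimpleImputer(strategy="mean"),
                      LogisticRegression(max_iter=1000))
model.fit(X_aug, y)
print("training accuracy:", model.score(X_aug, y))
```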
MRI protocols are instruction sheets that radiology technologists use in routine clinical practice for guidance (e.g., slice position, acquisition parameters). At Mayo Clinic Arizona (MCA), there are over 900 MR protocols (spanning neuro, body, cardiac, breast, etc.), which makes maintaining and updating the protocol instructions a labor-intensive effort. The task is even more challenging given the different vendors (Siemens, GE, etc.). This is a universal problem faced by hospitals and medical research institutions. To increase the efficiency of MR practice, we designed and implemented a web-based platform (eProtocol) to automate the management of MRI protocols. It is built upon a database that automatically extracts protocol information from DICOM-compliant images and provides a user-friendly interface for technologists to create, edit, and update protocols. Advanced operations, such as protocol migration from scanner to scanner and the capability to upload multimedia content, were also implemented. To the best of our knowledge, eProtocol is the first automated MR protocol management tool used clinically. We expect this platform to significantly improve radiology operations efficiency, including better image quality and exam consistency, fewer repeat examinations, and fewer acquisition errors, with protocol instructions readily available to technologists during scans. In addition, this web-based platform can be extended to other imaging modalities, such as CT, mammography, and interventional radiology, and to different vendors for imaging protocol management.
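The kind of extraction that feeds such a database might look like the following sketch using pydicom (the file path and the particular attributes pulled are illustrative; the abstract does not specify eProtocol’s actual schema):

```python
# Minimal sketch: pull protocol-relevant attributes from a DICOM-compliant
# image with pydicom. The file name is a hypothetical placeholder.
from pydicom import dcmread

ds = dcmread("example_mr_image.dcm")            # illustrative file path
protocol_info = {
    "Manufacturer": ds.get("Manufacturer"),
    "ProtocolName": ds.get("ProtocolName"),
    "SeriesDescription": ds.get("SeriesDescription"),
    "RepetitionTime": ds.get("RepetitionTime"),  # TR, ms
    "EchoTime": ds.get("EchoTime"),              # TE, ms
    "SliceThickness": ds.get("SliceThickness"),  # mm
}
print(protocol_info)                             # row destined for the database
```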
This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First, according to the principles of geometrical optics, we deduce the relationship between the prism single-camera system and a dual-camera system; according to the principles of binocular vision, we deduce the relationship between binocular vision and the dual-camera system. We can thus establish the relationship between the prism single-camera system and binocular vision and obtain the positional relationship among the prism, camera, and object that yields the best stereo display. Finally, using NVIDIA active-shutter stereo glasses, we realize a three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various viewing configurations of human eyes. The stereo imaging system designed by the proposed method can faithfully reconstruct the 3-D shape of the photographed object.
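Once the prism single-camera system has been mapped to an equivalent dual-camera rig, depth recovery follows the standard stereo relation; the sketch below shows that final step only (the focal length and baseline values are assumptions, and the paper’s full prism derivation is not reproduced here):

```python
# Minimal sketch of the binocular-equivalence endpoint (standard stereo
# geometry, not the paper's prism derivation): with equivalent focal length
# f (pixels) and baseline b (meters), depth follows from disparity d.
f = 1200.0   # equivalent focal length in pixels (assumed)
b = 0.06     # equivalent baseline in meters (assumed)

def depth_from_disparity(d_pixels: float) -> float:
    """Depth Z = f * b / d of a point seen with disparity d in the pair."""
    return f * b / d_pixels

print(f"disparity 24 px -> depth {depth_from_disparity(24.0):.2f} m")
```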
KEYWORDS: Magnetic resonance imaging, Kidney, Sensors, 3D image processing, Image processing, Detection and tracking algorithms, 3D magnetic resonance imaging, Image segmentation, Blob detection, Tissues
The glomeruli of the kidney perform the key role of blood filtration, and the number of glomeruli in a kidney is correlated with susceptibility to chronic kidney disease and chronic cardiovascular disease. This motivates the development of new technology using magnetic resonance imaging (MRI) to measure the number of glomeruli and nephrons in vivo. However, there is currently a lack of computationally efficient techniques to perform fast, reliable, and accurate counts of glomeruli in MR images due to issues inherent in MRI, such as acquisition noise, partial volume effects (the mixture of several tissue signals in a voxel), and bias field (spatial intensity inhomogeneity). These challenges are particularly severe because the glomeruli are very small (in our case, an MRI image is ~16 million voxels, and each glomerulus spans only 8–20 voxels) and the number of glomeruli is very large. To address this, we have developed an efficient Hessian-based Difference of Gaussians (HDoG) detector to identify the glomeruli in 3D rat MR images. The image is first smoothed via DoG, followed by the Hessian process to pre-segment and delineate the boundaries of the glomerulus candidates. This then provides a basis to extract regional features used in an unsupervised clustering algorithm, which completes segmentation by removing the false identifications that occurred in the pre-segmentation. The experimental results show that the Hessian-based DoG has the potential to automatically detect glomeruli from MRI in 3D, enabling new measurements of renal microstructure and pathology in preclinical and clinical studies.
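The core DoG-then-Hessian step can be sketched on a synthetic volume as follows (parameters, thresholds, and the blob criterion are illustrative; the clustering-based false-positive removal stage is omitted):

```python
# Minimal sketch of the HDoG idea: smooth a 3D volume with a Difference of
# Gaussians, then keep voxels whose Hessian indicates a bright blob (all
# three eigenvalues negative). Synthetic data, illustrative parameters.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)
vol = rng.normal(size=(64, 64, 64))
vol[30:33, 30:33, 30:33] += 4.0                  # implant a small bright blob

dog = gaussian_filter(vol, 1.0) - gaussian_filter(vol, 2.0)

# Hessian via second-order finite differences of the smoothed volume.
grads = np.gradient(dog)
H = np.empty(dog.shape + (3, 3))
for i in range(3):
    for j, g in enumerate(np.gradient(grads[i])):
        H[..., i, j] = g
eigvals = np.linalg.eigvalsh(H.reshape(-1, 3, 3)).reshape(dog.shape + (3,))

# Bright-blob criterion: negative-definite Hessian plus a response threshold.
blob_mask = np.all(eigvals < 0, axis=-1) & (dog > dog.mean() + 3 * dog.std())
print("candidate blob voxels:", int(blob_mask.sum()))
```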
DICOM Index Tracker (DIT) is an integrated platform that harvests the rich information available from Digital Imaging and Communications in Medicine (DICOM) to improve quality assurance in radiology practices. It is designed to capture and maintain longitudinal, patient-specific exam indices of interest for all diagnostic and procedural uses of imaging modalities, effectively serving as a quality assurance and patient safety monitoring tool. The foundation of DIT is an intelligent database system that stores the information accepted and parsed via a DICOM receiver and parser, and this database enables basic dosimetry analysis. The success of the DIT implementation at Mayo Clinic Arizona calls for deployment at the enterprise level, which requires significant improvements. First, for a geographically distributed multi-site implementation, one bottleneck is communication (network) delay, and another is the scalability of the DICOM parser in handling the large volume of exams from different sites; to address these issues, the DICOM receiver and parser are separated and decentralized by site. Second, a notable challenge for enterprise-wide Quality Assurance (QA) is the great diversity of manufacturers, modalities, and software versions; as a solution, DIT Enterprise provides standardization tools for device naming, protocol naming, and physician naming across sites. Third, advanced analytic engines are implemented online to support proactive QA in DIT Enterprise.
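The standardization step might be pictured with simple alias tables, as in the sketch below (the mapping tables and record fields are illustrative, not the production schema):

```python
# Minimal sketch of cross-site name standardization in the spirit of DIT
# Enterprise: raw vendor/site-specific names are normalized before QA
# analytics. All aliases and fields here are hypothetical.
DEVICE_ALIASES = {
    "SIEMENS_SKYRA_RM1": "MR-Skyra-01",
    "Skyra 3T Rm.1": "MR-Skyra-01",
}
PROTOCOL_ALIASES = {
    "BRAIN W/O": "MR Brain without contrast",
    "brain_wo_contrast": "MR Brain without contrast",
}

def standardize(record: dict) -> dict:
    """Map site-specific device/protocol names to enterprise-standard names."""
    out = dict(record)
    out["device"] = DEVICE_ALIASES.get(record["device"], record["device"])
    out["protocol"] = PROTOCOL_ALIASES.get(record["protocol"], record["protocol"])
    return out

print(standardize({"device": "Skyra 3T Rm.1", "protocol": "BRAIN W/O"}))
```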
A novel three-stage Semi-Supervised Learning (SSL) approach is proposed for improving the performance of computerized breast cancer analysis with undiagnosed data. The three stages are: (1) instance selection, which is rarely used in SSL or computerized cancer analysis systems, (2) feature selection, and (3) a newly designed ‘Divide Co-training’ data labeling method. 379 suspicious early breast cancer area samples from 121 mammograms were used in our research. Our proposed ‘Divide Co-training’ method generates two classifiers by splitting the original diagnosed dataset (labeled data), and labels the undiagnosed data (unlabeled data) when the two classifiers reach agreement. The highest AUC (Area Under Curve, also called the Az value) using labeled data only was 0.832, and it increased to 0.889 when undiagnosed data were included. The results indicate that the instance selection module can eliminate atypical or noisy data and enhance the subsequent semi-supervised data labeling performance. Analyzing different data sizes shows that the AUC and accuracy increase with either more diagnosed data or more undiagnosed data, reaching the best improvement (ΔAUC = 0.078, ΔAccuracy = 7.6%) with 40 labeled and 300 unlabeled samples.
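The agreement-based labeling step can be sketched as follows (an assumed simplification of ‘Divide Co-training’: the classifier choice and synthetic features are placeholders, and the instance- and feature-selection stages are omitted):

```python
# Minimal sketch of 'Divide Co-training' labeling: split the labeled set,
# train two classifiers, and accept a pseudo-label for an unlabeled sample
# only when both classifiers agree. Synthetic data, illustrative sizes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(6)
X_lab, y_lab = rng.normal(size=(40, 12)), rng.integers(0, 2, size=40)
X_unlab = rng.normal(size=(300, 12))

half = len(y_lab) // 2
c1 = RandomForestClassifier(random_state=0).fit(X_lab[:half], y_lab[:half])
c2 = RandomForestClassifier(random_state=1).fit(X_lab[half:], y_lab[half:])

p1, p2 = c1.predict(X_unlab), c2.predict(X_unlab)
agree = p1 == p2                                  # label only on agreement
X_new = np.vstack([X_lab, X_unlab[agree]])        # enlarged training set
y_new = np.concatenate([y_lab, p1[agree]])
print(f"pseudo-labeled {agree.sum()} of {len(X_unlab)} unlabeled samples")
```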
The concept of an integrated or synthesized supply chain is a strategy for managing today's globalized and customer-driven supply chains in order to better meet customer demands. Synthesizing individual entities into an integrated supply chain can be a challenging task due to a variety of factors, including conflicting objectives, mismatched incentives, and the constraints of the individual entities. Furthermore, understanding the effects of disruptions occurring at any point in the system is difficult when working toward synthesizing supply chain operations. Therefore, the goal of this research is to present a modeling methodology that manages the synthesis of a supply chain by linking hierarchical levels of the system and that models and analyzes disruptions in the integrated supply chain. The contribution of this research is threefold: (1) supply chain systems can be modeled hierarchically; (2) the performance of the synthesized supply chain system can be evaluated quantitatively; (3) reachability analysis is used to evaluate system performance and verify whether a specific state is reachable, allowing the user to understand the extent of the effects of a disruption.
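As a hedged illustration of point (3), reachability over an abstract state-transition model can be checked with a breadth-first search; the states and transitions below are invented for illustration and are not the paper's model:

```python
# Minimal sketch of reachability analysis on a state-transition model:
# BFS decides whether a disruption state can be reached from a start state.
from collections import deque

transitions = {                      # hypothetical supply chain states
    "normal": ["supplier_delay", "demand_spike"],
    "supplier_delay": ["stockout", "normal"],
    "demand_spike": ["stockout"],
    "stockout": ["recovery"],
    "recovery": ["normal"],
}

def reachable(start: str, target: str) -> bool:
    """Return True if `target` can be reached from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if state == target:
            return True
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print("stockout reachable from normal:", reachable("normal", "stockout"))
```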
KEYWORDS: Web services, Data storage, Distributed computing, Standards development, Databases, Telecommunications, Product engineering, Information fusion, Manufacturing, Data processing
Due to increased competition in today's global environment, companies are embracing Collaborative Product Development (CPD) as a path to success in new product development. CPD is an engineering process that involves decision-making through iterative communication and coordination among product designers throughout the lifecycle of the product. The high level of collaboration and communication in a CPD environment requires a robust distributed information system. In this paper, we review existing research related to CPD and propose an information framework, termed VE4PD, based on the integration of Web services and agent technologies to manage the CPD process across the product lifecycle. VE4PD maintains information consistency and enables proactive information updates to facilitate faster and more efficient CPD. In addition, the use of agent technology helps to leverage the integration of server and client applications. An implementation system was developed to validate the approach.
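The proactive-update behavior can be pictured with a simple publish/subscribe sketch (an illustrative pattern only; the actual VE4PD Web-service/agent implementation is not described at this level in the abstract):

```python
# Minimal sketch of proactive information updates: when a designer changes
# shared product data, subscribed agents are pushed a notification rather
# than polling. Class and field names are hypothetical.
from typing import Callable

class ProductDataHub:
    """Holds shared product data and pushes updates to subscribers."""
    def __init__(self) -> None:
        self._data: dict = {}
        self._subscribers: list[Callable[[str, object], None]] = []

    def subscribe(self, callback: Callable[[str, object], None]) -> None:
        self._subscribers.append(callback)

    def update(self, key: str, value: object) -> None:
        self._data[key] = value
        for notify in self._subscribers:   # proactive push, not polling
            notify(key, value)

hub = ProductDataHub()
hub.subscribe(lambda k, v: print(f"design agent notified: {k} -> {v}"))
hub.update("bracket_thickness_mm", 3.5)
```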