Most colon CAD (computer aided detection) software products, especially commercial products, are designed for use by
radiologists in a clinical environment. Therefore, those features that effectively assist radiologists in finding polyps are
emphasized in those tools. However, colon CAD researchers, many of whom are engineers or computer scientists, are
working with CT studies in which polyps have already been identified using CT Colonography (CTC) and/or optical
colonoscopy (OC). Their goal is to utilize that data to design a computer system that will identify all true polyps with no
false positive detections. Therefore, they are more concerned with reducing false positives and understanding the
behavior of the system than with finding polyps. Thus, colon CAD researchers have requirements that are not met by
current CAD software. We have implemented a module in 3D Slicer to assist these researchers. As with clinical
colon CAD implementations, the ability to promptly locate a polyp candidate in a 2D slice image and on a 3D colon
surface is essential for researchers. Our software provides this capability and, uniquely, displays for each polyp
candidate the prediction value from the classifier next to its 3D view, along with its CTC/OC finding.
This capability makes it easier to study each false positive detection and identify its causes. We describe features in our
colon CAD system that meet researchers' specific requirements. Our system is implemented as an open-source
3D Slicer module, and the software is available to the public for use and for extension
(http://www2.wfubmc.edu/ctc/download/).
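To illustrate the kind of capability described above, the following is a minimal sketch of how polyp candidates might be annotated and navigated from 3D Slicer's Python console. The candidate coordinates, scores, and CTC/OC findings are made up, and the markups calls assume a recent (4.11+/5.x) Slicer Python API; the released module linked above is the authoritative implementation.

```python
# Sketch only (not the released module): show each polyp candidate's classifier
# prediction value and CTC/OC finding next to its location, and jump the 2D
# slice viewers to a candidate.  Intended to run in 3D Slicer's Python console.
import slicer
import vtk

# Hypothetical candidate list: RAS coordinates, classifier score, CTC/OC finding.
candidates = [
    {"ras": (12.4, -35.1, 48.7), "score": 0.91, "finding": "CTC+/OC+ 8 mm polyp"},
    {"ras": (-20.3, 10.8, 31.2), "score": 0.34, "finding": "false positive (fold)"},
]

markups = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLMarkupsFiducialNode", "PolypCandidates")
for i, c in enumerate(candidates):
    markups.AddControlPoint(vtk.vtkVector3d(*c["ras"]))
    # Label shows the prediction value and the CTC/OC finding for this candidate.
    markups.SetNthControlPointLabel(i, f'{c["score"]:.2f} | {c["finding"]}')

# Center the 2D slice viewers on the first candidate for quick review.
slicer.modules.markups.logic().JumpSlicesToLocation(*candidates[0]["ras"], True)
```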
A fully automated computerized polyp detection (CPD) system is presented that takes DICOM images from CT scanners and provides a list of detected polyps. The system comprises three stages: segmentation, polyp candidate generation (PCG), and false positive reduction (FPR). Employing computed tomographic colonography (CTC), both supine and prone scans are used to improve detection sensitivity. We developed a novel and efficient segmentation scheme. Major shape features, e.g., the mean curvature and Gaussian curvature, together with a connectivity test, efficiently produce polyp candidates. We select six shape features and introduce a multi-plane linear discriminant function (MLDF) classifier in our system for FPR. The classifier parameters are empirically assigned with respect to the geometric meaning of each feature. We have tested the system on 68 real subjects, 20 positive and 48 negative for 6 mm and larger polyps according to colonoscopy results. Using a patient-based criterion, 95% accuracy and 31% specificity were achieved when 6 mm was used as the cutoff size, implying that 15 out of 48 healthy subjects could avoid OC. One 11 mm polyp was missed by CPD but was also not reported by the radiologist. With a complete polyp database, we anticipate that a maximum a posteriori probability (MAP) classifier tuned by supervised training will improve the detection performance. The execution time for both scans is about 10-15 minutes using a 1 GHz PC running Linux. The system may be used standalone, but is envisioned more as part of a computer-aided CTC screening process that can address the problems of both a fully automatic approach and a purely physician-driven approach.
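As a rough illustration of the curvature-based candidate generation step, the sketch below flags cap-like colon-surface points from principal curvatures and groups flagged voxels with a connectivity test. The thresholds (h_thresh, min_voxels) and sign conventions are illustrative assumptions, not the values used in the paper.

```python
import numpy as np
from scipy import ndimage

def cap_like(k1, k2, h_thresh=0.08, tol=1e-6):
    """Flag surface points whose curvature suggests a polyp-like cap.

    k1, k2 : arrays of principal curvatures at colon-surface points
             (signs follow an assumed inward-normal convention).
    """
    H = 0.5 * (k1 + k2)          # mean curvature
    K = k1 * k2                  # Gaussian curvature
    # Elliptical, convex patches (K > 0, H above a size-related threshold)
    # resemble the cap of a sessile polyp; flat walls and saddle-shaped folds do not.
    return (K > tol) & (H > h_thresh)

def candidates_from_mask(flag_volume, min_voxels=5):
    """Connectivity test: group flagged voxels into connected components and
    keep those large enough to be plausible candidates."""
    labels, n = ndimage.label(flag_volume)
    sizes = ndimage.sum(flag_volume, labels, range(1, n + 1))
    return [i + 1 for i, s in enumerate(sizes) if s >= min_voxels]
```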
We use three features, intensity, texture, and motion, to obtain robust segmentation of intracoronary ultrasound images. Using a parameterized equation to describe the lumen-plaque and media-adventitia boundaries, we formulate segmentation as parameter estimation through a cost functional based on the posterior probability, which handles the incompleteness of the features in ultrasound images by employing outlier detection.
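A minimal sketch of boundary parameter estimation in this spirit is given below: a truncated Fourier series describes a closed boundary, and a robust loss in scipy stands in for the posterior-based outlier handling described above. The harmonic count and loss scale are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def radius_model(theta, params):
    """Closed boundary parameterized as a truncated Fourier series r(theta)."""
    r = params[0] * np.ones_like(theta)
    n_harm = (len(params) - 1) // 2
    for k in range(1, n_harm + 1):
        r += params[2 * k - 1] * np.cos(k * theta) + params[2 * k] * np.sin(k * theta)
    return r

def fit_boundary(theta_obs, r_obs, n_harm=4):
    """Estimate boundary parameters from noisy edge-point radii.
    A Huber loss is used here as a stand-in for the paper's outlier handling."""
    x0 = np.zeros(1 + 2 * n_harm)
    x0[0] = np.median(r_obs)            # start from a circle at the median radius
    res = least_squares(lambda p: radius_model(theta_obs, p) - r_obs,
                        x0, loss="huber", f_scale=0.5)
    return res.x
```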
The problem of high resolution image reconstruction is approached in this project as an optimization problem. Assuming an ideal image is blurred, noise corrupted, and subsampled to produce the measured image, we pose the estimation of the enlarged image as a maximum a posteriori (MAP) restoration process, and the mean field annealing optimization technique is used to solve the multimodal objective function. The iterative interpolation process incorporates two terms into its objective function. The first term is the 'noise' term, which models the blurring and subsampling of the acquisition system. By using the system point spread function and the noise characteristics, the measured pixels on the subsampled grid are mapped into the grid of the original image. The second term, the a priori term, is formulated to force prior constraints such as noise smoothing and edge preservation into the interpolation process. The resulting image is a noise reduced, deblurred, and enlarged image. The proposed algorithm is used to zoom several medical images, along with existing techniques such as pixel replication, linear interpolation, and spectrum extrapolation. The resulting images indicate that the proposed algorithm can smooth noise extensively while preserving image features. The images zoomed by the other methods suffer from noise and look less favorable in comparison.
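The two-term objective described above might look like the following sketch, with a Gaussian PSF, integer subsampling, and a bounded edge-preserving penalty standing in for the paper's exact models; the mean field annealing schedule itself is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def objective(x, y, factor, psf_sigma, lam, T):
    """Two-term MAP objective for the enlarged image x (illustrative only).

    Noise term : blur x with the system PSF, subsample to the measured grid,
                 and penalize the squared mismatch with the measurement y.
    Prior term : a bounded, edge-preserving penalty on neighboring-pixel
                 differences; T plays the role of the annealing temperature
                 (large T smooths, small T lets edges through).
    """
    pred = gaussian_filter(x, psf_sigma)[::factor, ::factor]
    noise_term = 0.5 * np.sum((pred - y) ** 2)

    prior_term = 0.0
    for axis in (0, 1):
        d = np.diff(x, axis=axis)
        prior_term += np.sum(1.0 - np.exp(-(d ** 2) / (2 * T ** 2)))  # bounded penalty
    return noise_term + lam * prior_term
```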
The filtered-backprojection (FBP) algorithm and statistical model based iterative algorithms, such as maximum likelihood (ML) reconstruction, are the two major classes of tomographic reconstruction methods. The FBP method is widely used in the clinical setting, while iterative methods have attracted research interest in the past decade. In this paper we study the performance of the FBP and ML methods using simulated projection data. The results indicate that the best image that the FBP or the ML algorithm can generate is a compromise between image smoothness and sharpness. The filter cutoff frequency for the FBP algorithm, or the number of iterations for the ML algorithm, has to be selected carefully.
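For reference, a textbook ML-EM iteration for Poisson projection data is sketched below; the system matrix A is assumed given. The iteration-count trade-off it exhibits parallels the cutoff-frequency trade-off of FBP noted above.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood EM reconstruction for Poisson projection data.

    A : (n_bins, n_pixels) system matrix;  y : measured sinogram counts.
    Too few iterations gives an over-smooth image, too many amplifies noise.
    """
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])         # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x
        ratio = y / np.maximum(proj, 1e-12)  # avoid division by zero
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```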
Current positron emission tomography techniques for the measurement of cerebral blood flow assume that voxels represent pure material regions. In this work, a method is presented which utilizes anatomical information from a high resolution modality such as MRI in conjunction with a multicompartment extension of the Kety model to obtain intravoxel, tissue specific blood flow values. In order to evaluate the proposed method, noisy time activity curves (TACs) were simulated representing different combinations of gray matter, white matter and CSF, and ratios of gray to white matter blood flow. In all experiments it was assumed that registered MR data supplied the number of materials and the fraction of each present. For each TAC, three experiments were run. In the first it was assumed that the fraction of each material determined by MRI was correct, and, in the second two, that the value was either too high or too low. Using the tree annealing method, material flows were determined which gave the best fit of the model to the simulated TAC data. The results indicate that the accuracy of the method is approximately linearly related to the error in material fraction estimated for a voxel.
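A simplified sketch of the multicompartment fit is given below: each tissue follows a one-compartment Kety model, MRI-derived fractions weight the tissue curves, and a generic least-squares solver stands in for the tree annealing optimizer used in the work. Uniform sampling, partition coefficients, and bounds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def kety_tac(t, ca, flow, lam):
    """Single-tissue Kety model: C(t) = F * [Ca convolved with exp(-(F/lam) t)]."""
    dt = t[1] - t[0]                         # assumes uniform sampling
    kernel = np.exp(-(flow / lam) * t)
    return flow * np.convolve(ca, kernel)[: len(t)] * dt

def voxel_tac(t, ca, fractions, flows, lams):
    """Mixed voxel: MRI-derived material fractions weight the tissue TACs."""
    return sum(f * kety_tac(t, ca, F, L)
               for f, F, L in zip(fractions, flows, lams))

def fit_flows(t, ca, tac_meas, fractions, lams, f0):
    """Estimate per-tissue flows for one voxel.  A least-squares solver is used
    here in place of the tree annealing method."""
    res = least_squares(
        lambda F: voxel_tac(t, ca, fractions, F, lams) - tac_meas,
        f0, bounds=(0, np.inf))
    return res.x
```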
The method of alternating projections onto convex sets (POCS) is used to process images for both compression and enhancement. Convex sets are derived that define certain desirable characteristics of the images for both applications. A new image is produced using POCS that satisfies these characteristics while relaxing others. For enhancement, images are produced that display more of the desired information, such as adjacent pixel differences. For compression, relaxing the characteristics not deemed important allows for improved coding efficiency. POCS provides the ability to define the problem piecewise, to apply as many or as few constraints as desired, and to implement the algorithm easily by separately deriving and implementing the projection operators.
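A minimal POCS sketch with two illustrative convex sets, an amplitude bound and a low-frequency (coding-friendly) band limit, is shown below; the constraint sets actually derived in the paper differ.

```python
import numpy as np

def project_amplitude(x, lo, hi):
    """Projection onto the convex set of images with pixel values in [lo, hi]."""
    return np.clip(x, lo, hi)

def project_bandlimit(x, keep_frac):
    """Projection onto the convex set of images whose spectrum is confined to a
    centered low-frequency block (an illustrative coding-friendly constraint)."""
    X = np.fft.fftshift(np.fft.fft2(x))
    mask = np.zeros_like(X)
    r, c = X.shape
    kr, kc = int(r * keep_frac / 2), int(c * keep_frac / 2)
    mask[r // 2 - kr : r // 2 + kr, c // 2 - kc : c // 2 + kc] = 1
    return np.real(np.fft.ifft2(np.fft.ifftshift(X * mask)))

def pocs(x0, n_iter=20, lo=0.0, hi=1.0, keep_frac=0.25):
    """Alternate the two projections; the result satisfies both constraints
    (or lies between the sets if they do not intersect)."""
    x = x0.copy()
    for _ in range(n_iter):
        x = project_bandlimit(project_amplitude(x, lo, hi), keep_frac)
    return x
```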
Finite mixture density (FMD) based approaches to medical image classification and quantification problems have received considerable interest lately. In this paper, we show through computer simulations that as the resolution of the underlying imaging modality decreases (its full width at half maximum (FWHM) increases), successful application of an FMD approach becomes increasingly difficult. A 19 slice computer phantom of the human brain was used. This phantom, generated from MR images of a human brain, is composed of gray matter, white matter, and cerebrospinal fluid regions. Image sets were generated using Gaussian kernels of various sizes and FWHMs. The distributions of single component and multiple component pixels were then generated from these image sets. A planar acquisition of a single slice brain phantom is also presented for comparison. It is shown that, with decreasing image resolution, a major weakness of the FMD approach is its inability to incorporate spatial information. Decreasing resolution with respect to object size results in an increasing number of partial volume pixels, with corresponding effects on the FMD components.
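The resolution effect described above can be reproduced with a small sketch: blur a labeled phantom with Gaussian PSFs of increasing FWHM and measure the fraction of partial-volume pixels falling between the pure-tissue means. The mixing band used here is an arbitrary illustration, not the paper's criterion.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def partial_volume_fraction(phantom, fwhm, mix_band=(0.05, 0.95)):
    """Blur a labeled phantom (pixel values = pure-tissue means) with a Gaussian
    PSF of the given FWHM, then report the fraction of pixels whose value lies
    strictly between the pure-tissue means, i.e., partial-volume pixels that a
    finite mixture of pure-tissue Gaussians cannot represent."""
    sigma = fwhm / 2.355                      # FWHM -> Gaussian sigma
    blurred = gaussian_filter(phantom.astype(float), sigma)
    lo, hi = phantom.min(), phantom.max()
    lo_cut = lo + mix_band[0] * (hi - lo)
    hi_cut = lo + mix_band[1] * (hi - lo)
    return np.mean((blurred > lo_cut) & (blurred < hi_cut))
```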
Research is presented in which white matter lesions are quantified using MRI data on cardiac surgery patients. Various methods of quantification are presented, including finite mixture density analysis of various MRI parameters, K-means, and principal components analysis. Pre- and post-operative data sets are studied for each patient to determine the change in lesion load due to surgery. The various methods are compared and the differences are indicated on both registered and unregistered data sets. Agreement among the methods is not good in many instances and at times shows an inverse correlation. Images and data showing the gray scale distributions are presented.
A new postprocessing method of correcting for respiratory motion induced artifacts in MRI is presented. The motion of the chest during respiration is modeled as a combination of translation and dilation. Displacements of the chest wall are tracked via a thin, MR-sensitive plate placed on the patient's chest during a scan. Scanning with phase encoding left/right (L/R) and frequency encoding anterior/posterior (A/P) causes the motion artifacts to be repeated in the L/R direction, and thus not to overlap the plate. By performing the inverse A/P Fourier transform, the resulting hybrid space data has A/P spatial data and L/R spatial frequency data, in which the motion of the plate is clearly visible as a nearly periodic waveform. Modeling the motion of the chest wall as an equal combination of translation and dilation allows corrections to the image to be made in k-space using properties of the Fourier transform and the measured displacement data. Noticeable reduction of the intensity of the motion artifacts is achieved, indicating the validity of the motion model and tracking method.
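The translational half of the correction follows directly from the Fourier shift theorem, as in the sketch below; the dilation component and the plate-tracking step are omitted, and the data layout is an assumption.

```python
import numpy as np

def correct_translation(kspace, shifts_mm, fov_mm):
    """Remove a per-phase-encode-line A/P translation from raw k-space data.

    kspace    : (n_pe, n_ro) array, frequency encoding along axis 1 (A/P)
    shifts_mm : measured A/P displacement when each phase-encode line was
                acquired (e.g., from the MR-visible plate)
    A shift d in image space multiplies k-space by exp(-i*2*pi*k*d), so the
    correction applies the opposite phase ramp along the readout direction.
    """
    n_pe, n_ro = kspace.shape
    k = np.fft.fftfreq(n_ro, d=fov_mm / n_ro)       # cycles/mm along A/P
    return kspace * np.exp(2j * np.pi * np.outer(shifts_mm, k))
```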
In the reconstruction of positron emission tomography images, each slice of the image volume is individually reconstructed from a sinogram, in which the statistics of the data elements are Poisson and the image data is hidden by the mechanism of projection. We propose a method of image reconstruction which incorporates the given data set and also reflects the a priori knowledge that the image consists of smooth noiseless regions separated by sharp edges. This method uses both maximum likelihood and maximum a posteriori techniques in a manner that is similar to techniques used by others, but our method incorporates a bounded prior term and adaptive annealing techniques. These advancements prevent excessive smoothing and address the difficulties presented by parameter selection and image convergence.
This research addresses the problem of automatically quantifying the various types of brain tissue, CSF, white matter, and gray matter, using T1-weighted magnetic resonance images. The method employs a statistical model of the noise and partial volume effect and fits the derived probability density function to that of the data. Following this fit, the optimal decision points can be found for the materials and thus they can be quantified. Emphasis is placed on repeatable results for which a confidence in the solution might be measured. Results are presented assuming a single Gaussian noise source and a uniform distribution of partial volume pixels for both simulated and actual data. Thus far results have been mixed, with no clear advantage being shown in taking into account partial volume effects. Due to the fitting problem being ill-conditioned, it is not yet clear whether these results are due to problems with the model or the method of solution.
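Under the stated model, the intensity pdf combines Gaussian pure-tissue classes with partial-volume classes whose uniform mixing fraction, after Gaussian noise, becomes a difference of normal CDFs. The sketch below evaluates such a pdf; the class structure (mixtures between adjacent tissues only, a single noise sigma) is an assumption consistent with the description above, not necessarily the exact model fitted in the paper.

```python
import numpy as np
from scipy.stats import norm

def tissue_pdf(x, mu, sigma, w_pure, w_pv):
    """Intensity pdf for T1-weighted brain data under a noise + partial volume model.

    mu     : sorted pure-tissue means, e.g., [CSF, gray matter, white matter]
    sigma  : single Gaussian noise standard deviation
    w_pure : weights of the pure-tissue classes
    w_pv   : weights of the partial-volume classes between adjacent means
    """
    # Pure classes: Gaussians around each tissue mean.
    p = sum(w * norm.pdf(x, m, sigma) for w, m in zip(w_pure, mu))
    # Partial-volume classes: uniform mixing fraction convolved with Gaussian
    # noise gives a difference of normal CDFs over each adjacent-mean interval.
    for w, (a, b) in zip(w_pv, zip(mu[:-1], mu[1:])):
        p += w * (norm.cdf((x - a) / sigma) - norm.cdf((x - b) / sigma)) / (b - a)
    return p
```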
An algorithm is described which segments magnetic resonance images while removing the noise from the images without blurring or other distortion of edges. The problem of segmentation and noise removal is posed as a restoration of an uncorrupted image, given additive white Gaussian noise and a segmentation cost. The problem is solved using a strategy called Mean Field Annealing. An a priori statistical model of the image, which includes the region classification, is chosen which drives the minimization toward solutions which are locally homogeneous and globally classified. Application of the algorithm to brain and knee images is presented.
KEYWORDS: Signal to noise ratio, Heart, Magnetic resonance imaging, Image filtering, Motion estimation, Image processing, Error analysis, Motion models, Linear filtering, Data modeling
This paper describes the use of magnetic resonance imaging to produce quantitative information about heart motion. Motion estimation techniques utilizing convex set projections and simple block shift matching are applied to determine the direction and magnitude of heart motion. These methods are tested and compared using simulated motion on a single MRI frame and actual MRI cine data. The simulated motion is in the form of translation-only and linear velocity fields. Motion estimation is performed using image pairs with and without additive noise. Results are given which quantify the effectiveness of the algorithms.
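A simple block shift matching estimator of the kind compared above is sketched below; block size, search range, and the sum-of-absolute-differences criterion are generic choices rather than the paper's exact settings.

```python
import numpy as np

def block_match(frame1, frame2, block=16, search=8):
    """Simple block shift matching between two cine frames.

    For each block in frame1, find the integer shift (within +/- search pixels)
    that minimizes the sum of absolute differences in frame2, giving a coarse
    motion field (direction and magnitude per block)."""
    frame1 = frame1.astype(float)
    frame2 = frame2.astype(float)
    rows, cols = frame1.shape
    field = np.zeros((rows // block, cols // block, 2), dtype=int)
    for bi in range(rows // block):
        for bj in range(cols // block):
            r0, c0 = bi * block, bj * block
            ref = frame1[r0:r0 + block, c0:c0 + block]
            best, best_shift = np.inf, (0, 0)
            for dr in range(-search, search + 1):
                for dc in range(-search, search + 1):
                    r1, c1 = r0 + dr, c0 + dc
                    if r1 < 0 or c1 < 0 or r1 + block > rows or c1 + block > cols:
                        continue  # candidate block falls outside the frame
                    sad = np.abs(ref - frame2[r1:r1 + block, c1:c1 + block]).sum()
                    if sad < best:
                        best, best_shift = sad, (dr, dc)
            field[bi, bj] = best_shift
    return field
```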
We have implemented a voice recognition interface using a Dragon Systems VoiceScribe-1000 Speech
Recognition system installed on an AT&T 6310 personal computer. The Dragon Systems DragonKey
software allows the user to emulate keyboard functions using the speech recognition system and replaces
the presently used bar code system. The software supports user voice training, grammar design and
compilation, as well as speech recognition.
We have successfully integrated this voice interface in the clinical report generation system for most
standard mammography studies. We have found that the voice system provides a simple, user-friendly
interface which is more widely accepted in a medical environment because of its similarity to traditional
dictation. Although the system requires some initial time for voice training, it avoids potential delays in
transcription and proofreading. This paper describes the design and implementation of this voice
recognition interface in our department.
KEYWORDS: Quantization, Signal to noise ratio, Image quality, Medical imaging, Magnetic resonance imaging, Liver, Magnetism, Signal processing, Quality measurement, Receivers
The effects of quantization noise in magnetic resonance images (MRI) were studied, and simple modifications are
shown to give significant improvement in subjective image quality and in some quantitative measurements. A liver
phantom based on an MR liver image was created to simulate the effects of quantizing the MR signal. The MR signal
which would be generated by this phantom was characterized and used to study quantization effects as well as to
help in developing a more efficient quantization scheme.
Uniform quantization of the signal was simulated to determine the effects of quantization noise on the liver
phantom. Quantitative measurements using SNR and detectability were made and used as a basis of comparison
for similar measurements utilizing other quantizers including uniform quantization with quantizer overload and
logarithmic quantization. Quantitative measurements were again made and compared to the full range uniform
quantizer. Simulations were performed without additive noise, and results are given in the form of graphs, tables,
and actual images. Simulations were also performed with noisy data but are not reported in detail in this paper.
Quantitative measurements were consistent in most cases and agreed well with subjective evaluations.
It was found that uniform quantization noise can significantly affect image quality. It was also determined, by
using a logarithmic quantizer or by simple overloading of the uniform quantizer, that significant improvements in
image quality can be achieved. The results are extensible to other image collection systems, particularly those with
high dynamic range.
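The quantizers compared above can be sketched as follows: a uniform quantizer whose range can be deliberately overloaded, a mu-law compander as one possible realization of the logarithmic scheme, and an SNR measure for comparison. Bit depths, mu, and the companding law are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def uniform_quantize(x, n_bits, x_max):
    """Uniform quantizer over [-x_max, x_max]; choosing x_max below the true
    signal peak 'overloads' the quantizer, clipping rare large samples in
    exchange for finer steps on the rest of the range."""
    levels = 2 ** n_bits
    step = 2 * x_max / levels
    return np.clip(np.round(x / step) * step, -x_max, x_max - step)

def mu_law_quantize(x, n_bits, x_max, mu=255.0):
    """Logarithmic (mu-law) companding before uniform quantization, which
    allocates finer steps to small signal values."""
    y = np.sign(x) * np.log1p(mu * np.abs(x) / x_max) / np.log1p(mu)
    yq = uniform_quantize(y, n_bits, 1.0)
    return np.sign(yq) * x_max * np.expm1(np.abs(yq) * np.log1p(mu)) / mu

def snr_db(x, xq):
    """Signal-to-quantization-noise ratio used to compare the quantizers."""
    return 10 * np.log10(np.sum(x ** 2) / np.sum((x - xq) ** 2))
```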