Knee-related injuries, including meniscal tears, are common in both young athletes and the aging population and require accurate diagnosis and, when appropriate, surgical intervention. With proper technique and experience, radiologists can detect meniscal tears with high confidence; for radiologists without musculoskeletal training, however, the diagnosis can be challenging. This paper develops a novel computer-aided detection (CAD) system for automatic detection of meniscal tears in the knee. Evaluation of this CAD system on an archived database of images from 40 individuals with suspected knee injuries shows a sensitivity of 83.87% and a specificity of 75.19%, compared with a mean sensitivity of 77.41% and a mean specificity of 81.39% obtained by experienced radiologists in routine diagnosis without the CAD. These experimental results suggest that the developed CAD system holds great promise for automatic detection of both simple and complex meniscal tears of the knee.
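For readers outside the imaging literature, the two figures of merit quoted above are simple ratios over a 2x2 confusion matrix. The short Python sketch below shows the computation; the counts are hypothetical values chosen only so the output matches the quoted percentages, since the abstract does not report the underlying case breakdown.

# Minimal sketch: sensitivity and specificity from confusion-matrix counts.
# The counts below are hypothetical, chosen only to reproduce the
# percentages quoted above; the actual case breakdown is not given here.

def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) as fractions."""
    sensitivity = tp / (tp + fn)  # fraction of true tears that are detected
    specificity = tn / (tn + fp)  # fraction of tear-free readings correctly cleared
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=26, fn=5, tn=97, fp=32)
print(f"sensitivity = {100 * sens:.2f}%, specificity = {100 * spec:.2f}%")
# sensitivity = 83.87%, specificity = 75.19%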
Agreement between observers is estimated from correlated, paired scores (e.g., the scores of two doctors reading the same set of images) using statistics such as the correlation coefficient and measures of concordance. Some variance estimation techniques for these measures are also available in the literature. In this work, we compared four agreement measures: the widely used Pearson product-moment correlation coefficient, Kendall's tau, and two measures that generalize AUC, the area under the receiver operating characteristic (ROC) curve. The generalization allows the truth to be ordinal (polytomous, i.e., multi-state, or even continuous) rather than binary, so that the conventional AUC is a special case. We investigate how these measures behave in a multi-reader multi-case (MRMC) simulation experiment as we vary the intrinsic correlation and the number of rating levels. We also investigate several variance estimation techniques for these measures that are available in the literature. These agreement measures will help investigators developing model observers to compare their models against a human observer on a case-by-case basis, rather than with a summary figure of merit that requires and is limited by binary truth, such as AUC: a model observer's AUC can equal a human observer's AUC while the two make very different decisions on a case-by-case basis.
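As a concrete reference point, the sketch below computes the three kinds of measures just listed for a hypothetical pair of score vectors: SciPy's Pearson r and Kendall tau, plus a small pairwise estimator of the generalized AUC. With binary truth the estimator reduces to the usual AUC; the simulated 5-level ordinal truth and noisy reader scores are illustrative assumptions.

import numpy as np
from scipy.stats import pearsonr, kendalltau

def generalized_auc(scores, truth):
    """Estimate P(score_i > score_j | truth_i > truth_j), counting score
    ties as 1/2. Truth may be binary, polytomous, or continuous; with
    binary truth this reduces to the usual AUC."""
    concordant, comparable = 0.0, 0
    n = len(scores)
    for i in range(n):
        for j in range(n):
            if truth[i] > truth[j]:            # pair with ordered truth
                comparable += 1
                if scores[i] > scores[j]:
                    concordant += 1.0
                elif scores[i] == scores[j]:   # score tie counts half
                    concordant += 0.5
    return concordant / comparable

rng = np.random.default_rng(0)
truth = rng.integers(1, 6, size=100)             # 5-level ordinal truth
reader = truth + rng.normal(0.0, 1.0, size=100)  # noisy reader scores

print("Pearson r      :", pearsonr(reader, truth)[0])
print("Kendall tau    :", kendalltau(reader, truth)[0])
print("generalized AUC:", generalized_auc(reader, truth))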
Endmember extraction has received considerable interest in recent years. Of particular interest is the Pixel Purity Index (PPI), owing to its publicity and its availability in the ENVI software. Many variants of the PPI have also been developed. Among them is an interesting endmember extraction algorithm (EEA) called vertex component analysis (VCA), developed by Dias and Nascimento, which extends the PPI to a simplex-based EEA but uses orthogonal subspace projection (OSP) as the projection criterion rather than the simplex volume used by another well-known EEA, the N-finder algorithm (N-FINDR) developed by Winter. Interestingly, this paper will show that VCA is essentially the same algorithm as the Automatic Target Generation Process (ATGP), recently developed for automatic target detection and classification by Ren and Chang, except for the initial condition used to initialize the algorithm. In order to substantiate our findings, experiments using synthetic and real images are conducted for a comparative study and analysis.
KEYWORDS: Detection and tracking algorithms, Minerals, Interference (communication), Error analysis, Data modeling, Signal processing, Signal detection, Signal to noise ratio, Independent component analysis, Array processing
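To make the claimed equivalence concrete, the following sketch implements the ATGP loop as we read it: at each step an orthogonal subspace projection onto the complement of the targets found so far is applied, and the pixel with the largest residual becomes the next target; VCA differs essentially in the initialization. The brightest-pixel initial condition and the synthetic mixing model are illustrative assumptions, not the published code.

import numpy as np

def atgp(pixels, num_targets):
    """pixels: (N, L) array of N spectra over L bands.
    Returns the indices of the extracted target pixels."""
    # Initial condition: the pixel with the largest norm (brightest spectrum).
    idx = [int(np.argmax(np.sum(pixels ** 2, axis=1)))]
    while len(idx) < num_targets:
        U = pixels[idx].T                         # (L, k) targets found so far
        # OSP projector onto the orthogonal complement of span(U):
        # P = I - U (U^T U)^{-1} U^T, computed here via the pseudoinverse.
        P = np.eye(U.shape[0]) - U @ np.linalg.pinv(U)
        residual = pixels @ P                     # P is symmetric
        idx.append(int(np.argmax(np.sum(residual ** 2, axis=1))))
    return idx

# Tiny synthetic example: 3 endmembers mixed into 1000 pixels over 50 bands.
rng = np.random.default_rng(1)
E = rng.random((3, 50))                           # endmember signatures
A = rng.dirichlet(np.ones(3), size=1000)          # abundance fractions
cube = A @ E + 0.001 * rng.normal(size=(1000, 50))
print("extracted pixel indices:", atgp(cube, 3))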
A recently introduced concept, virtual dimensionality (VD), has shown promise in many applications of hyperspectral data exploitation. It was originally developed to estimate the number of spectrally distinct signal sources. This paper explores the utility of the VD from various signal processing perspectives and further investigates four techniques that can be used to estimate it: the Gershgorin radius (GR), orthogonal subspace projection (OSP), signal subspace estimation (SSE), and Neyman-Pearson detection (NPD). In particular, the OSP-based VD estimation technique is new and has several advantages over the other methods. In order to evaluate their performance, a comparative study and analysis is conducted via synthetic and real image experiments.
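As one concrete example of an NPD-style estimator, the sketch below follows the spirit of the eigenvalue-difference test commonly associated with VD estimation: eigenvalues of the sample correlation and covariance matrices are compared, and a source is declared wherever the difference exceeds a threshold set by a chosen false-alarm probability. The Gaussian variance approximation and the synthetic data are illustrative assumptions, not the paper's exact formulation.

import numpy as np
from scipy.stats import norm

def vd_npd(pixels, pf=1e-3):
    """pixels: (N, L) array. Returns the estimated number of sources."""
    N, L = pixels.shape
    R = (pixels.T @ pixels) / N                   # sample correlation matrix
    K = np.cov(pixels, rowvar=False)              # sample covariance matrix
    lam_R = np.sort(np.linalg.eigvalsh(R))[::-1]
    lam_K = np.sort(np.linalg.eigvalsh(K))[::-1]
    # Under the null hypothesis (noise only) the two eigenvalues agree;
    # approximate the variance of their difference and set a
    # Neyman-Pearson threshold for false-alarm probability pf.
    sigma = np.sqrt(2.0 * (lam_R ** 2 + lam_K ** 2) / N)
    tau = -norm.ppf(pf) * sigma                   # per-band threshold
    return int(np.sum(lam_R - lam_K > tau))

rng = np.random.default_rng(2)
E = rng.random((4, 60))                           # 4 source signatures
A = rng.dirichlet(np.ones(4), size=5000)
data = A @ E + 0.01 * rng.normal(size=(5000, 60))
print("estimated VD:", vd_npd(data, pf=1e-3))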
Knee-related injuries, including meniscal tears, are common in young athletes and require accurate diagnosis and
appropriate surgical intervention. Although with proper technique and skill, confidence in the detection of meniscal
tears should be high, this task continues to be a challenge for many inexperienced radiologists. The purpose of our study
was to automate detection of meniscal tears of the knee using a computer-aided detection (CAD) algorithm. Automated
segmentation of the sagittal T1-weighted MR imaging sequences of the knee in 28 patients with diagnoses of meniscal
tears was performed using morphologic image processing in a 3-step process including cropping, thresholding, and
application of morphological constraints. After meniscal segmentation, abnormal linear meniscal signal was extracted
through a second thresholding process. The results of this process were validated by comparison with the interpretations
of 2 board-certified musculoskeletal radiologists. The automated meniscal extraction algorithm successfully performed the region-of-interest selection, thresholding, and object shape constraint tasks, producing a convex image isolating the menisci in more than 69% of the 28 cases. A high correlation was also noted between the CAD
algorithm and human observer results in identification of complex meniscal tears. Our initial investigation indicates
considerable promise for automatic detection of simple and complex meniscal tears of the knee using the CAD
algorithm. This observation poses interesting possibilities for increasing radiologist productivity and confidence,
improving patient outcomes, and applying more sophisticated CAD algorithms to orthopedic imaging tasks.
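A minimal sketch of the 3-step pipeline described above, assuming scipy.ndimage is available: crop, threshold, apply morphological constraints, then run a second thresholding pass for abnormal linear signal inside the extracted meniscal region. The crop bounds, intensity thresholds, and structuring-element sizes are illustrative placeholders, not the tuned values from the study.

import numpy as np
from scipy import ndimage

def extract_meniscus(slice_t1, crop=(slice(60, 196), slice(40, 216)),
                     dark_thresh=0.25, bright_thresh=0.55):
    """slice_t1: 2-D float array in [0, 1] (sagittal T1-weighted slice).
    Returns (meniscus_mask, tear_mask) for the cropped region."""
    roi = slice_t1[crop]                          # step 1: crop to the joint
    dark = roi < dark_thresh                      # step 2: menisci are low-signal on T1
    # Step 3: morphological constraints - opening removes speckle, and
    # only components large enough to be meniscal tissue are retained.
    opened = ndimage.binary_opening(dark, structure=np.ones((3, 3)))
    labels, n = ndimage.label(opened)
    sizes = ndimage.sum(opened, labels, index=range(1, n + 1))
    meniscus = np.isin(labels, 1 + np.flatnonzero(sizes > 50))
    # Second thresholding pass: abnormal linear high signal inside the
    # meniscal region (close and fill so interior bright lines count).
    region = ndimage.binary_fill_holes(
        ndimage.binary_closing(meniscus, structure=np.ones((5, 5))))
    tear = region & (roi > bright_thresh)
    return meniscus, tear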
An endmember is an idealized, pure signature for a class and provides crucial information for hyperspectral image analysis. Recently, endmember extraction has received considerable attention in hyperspectral imaging because significantly improved spectral resolution substantially increases the likelihood that a pixel uncovered by a hyperspectral image sensor is an endmember. Many algorithms have been proposed for this purpose. One great challenge in endmember extraction is determining the number of endmembers, p, that an endmember extraction algorithm (EEA) is required to generate. Unfortunately, this issue has been overlooked and avoided by making an empirical assumption without justification. However, it has been shown that an appropriate selection of p is critical to success in extracting the desired endmembers from image data. This paper explores methods available in the literature that can be used to estimate the value of p. These include the commonly used eigenvalue-based energy method, the Akaike information criterion (AIC), minimum description length (MDL), the Gershgorin radii-based method, signal subspace estimation (SSE), and the Neyman-Pearson detection method from detection theory. In order to evaluate the effectiveness of these methods, two sets of experiments are conducted for performance analysis. The first set consists of synthetic image-based simulations, which allow us to evaluate performance with a priori knowledge, while the second set comprises real hyperspectral image experiments, which demonstrate the utility of these methods in real applications.
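For the information-theoretic criteria named above, the Wax-Kailath formulation of AIC and MDL over sample covariance eigenvalues is the standard starting point; the sketch below implements that formulation. Whether it matches the exact variant evaluated in the paper is an assumption on our part, and the synthetic cube is illustrative.

import numpy as np

def aic_mdl(pixels):
    """pixels: (N, L) array. Returns (p_aic, p_mdl)."""
    N, L = pixels.shape
    lam = np.sort(np.linalg.eigvalsh(np.cov(pixels, rowvar=False)))[::-1]
    lam = np.clip(lam, 1e-12, None)               # guard tiny/negative values
    aic, mdl = [], []
    for k in range(L - 1):
        tail = lam[k:]
        # log of (geometric mean / arithmetic mean) of the trailing eigenvalues
        log_ratio = np.mean(np.log(tail)) - np.log(np.mean(tail))
        aic.append(-2.0 * N * (L - k) * log_ratio + 2.0 * k * (2 * L - k))
        mdl.append(-N * (L - k) * log_ratio + 0.5 * k * (2 * L - k) * np.log(N))
    return int(np.argmin(aic)), int(np.argmin(mdl))

rng = np.random.default_rng(4)
E = rng.random((5, 40))                           # 5 endmember signatures
A = rng.dirichlet(np.ones(5), size=3000)
data = A @ E + 0.01 * rng.normal(size=(3000, 40))
print("p estimates (AIC, MDL):", aic_mdl(data))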
A hyperspectral imaging sensor images a scene using hundreds of contiguous spectral channels and can uncover many substances that cannot be resolved by multispectral sensors with tens of discrete spectral channels. Many spectral measures used for target discrimination and identification in hyperspectral imagery have been derived directly from multispectral imagery rather than from a hyperspectral viewpoint. This paper demonstrates that such spectral measures are often not effective when applied to real hyperspectral data for discrimination and identification because they do not take into account the very high sample spectral correlation (SSC) provided by hyperspectral sensors. In order to address this issue, two approaches, referred to as a priori sample spectral correlation (PR-SSC) and a posteriori SSC (PS-SSC), are developed to account for spectral variability within real data and achieve better target discrimination and identification. While the former can be used to derive a family of a priori hyperspectral measures via orthogonal subspace projection (OSP) to eliminate interfering effects caused by undesired signatures, the latter results in a family of a posteriori hyperspectral measures that include the sample covariance/correlation matrix as a posteriori information to increase discrimination and identification ability. Interestingly, some well-known measures such as Euclidean distance (ED) and the spectral angle mapper (SAM) can be shown to be special cases of the proposed PR-SSC and PS-SSC hyperspectral measures.
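For reference, the sketch below writes out the two special cases named at the end of the abstract, ED and SAM, together with a covariance-weighted distance as one simple way of folding a posteriori sample statistics into a measure; the weighting shown is an illustrative Mahalanobis-style choice, not the paper's PS-SSC definition.

import numpy as np

def ed(x, y):
    """Euclidean distance between two spectra."""
    return np.linalg.norm(x - y)

def sam(x, y):
    """Spectral angle mapper: the angle (radians) between two spectra."""
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def weighted_ed(x, y, K):
    """ED weighted by an inverse sample covariance/correlation matrix K,
    one illustrative a posteriori variant (Mahalanobis-style)."""
    d = x - y
    return float(np.sqrt(d @ np.linalg.inv(K) @ d))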
KEYWORDS: Signal detection, Sensors, Software development, Binary data, Signal processing, Detector development, Target detection, Image classification, Chemical analysis, Medical diagnostics
Under the U.S. Army sponsored Joint Service Agent Water Monitor (JSAWM) program, developing hand-held assays (HHAs) that use tickets for chemical/biological agent detection has been of major interest. One of the keys to success is to develop detection algorithms that can not only effectively detect the presence of various agents but also quantify the detected agents. This paper presents a recent development of detection software that performs three-dimensional (3D) receiver operating characteristic (ROC) analysis based on quantified agent concentration. ROC curves have been widely used in the communications, signal processing, and medical communities to evaluate the effectiveness of a detection technique. The analysis generally formulates a signal detection problem as a binary composite hypothesis testing problem, with the null hypothesis and the alternative hypothesis representing the case of no signal and the case of signal presence, respectively. The ROC curve is then plotted as the detection probability (power) PD versus the false alarm probability PF. Unfortunately, such a two-dimensional (2D) (PD,PF)-based ROC curve does not factor in the concentration detected in an agent signal, which is a crucial parameter in chemical/biological agent detection. The proposed 3D ROC analysis was developed from this need. It includes an additional parameter, referred to as the threshold t, which is used to threshold the detected agent signal concentration; consequently, a different value of t results in a different 2D ROC curve. In order to take the thresholding factor t into account, a 3D ROC curve is derived and plotted based on three parameters, (PD,PF,t). From the 3D ROC curve, three 2D ROC curves can also be derived: the conventional 2D (PD,PF)-ROC curve, a 2D (PD,t)-ROC curve that describes the relationship between PD and the threshold value t, and a 2D (PF,t)-ROC curve that shows the effect of the threshold value t on PF. The utility of the proposed 3D ROC analysis is demonstrated with the detection software developed at UMBC for the tickets used in HHAs for water monitoring.
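A minimal sketch of the 3D ROC construction as described: sweep the threshold t across the range of detected concentrations, record (PD, PF, t), and read the three 2D curves off as projections. The Gaussian score distributions standing in for detected agent concentrations are illustrative assumptions.

import numpy as np

def roc_3d(present, absent, num_t=100):
    """present/absent: detected concentrations under each hypothesis.
    Returns (t, PD, PF) arrays; the three 2D curves are projections."""
    both = np.concatenate([present, absent])
    t = np.linspace(both.min(), both.max(), num_t)
    pd = (present[None, :] > t[:, None]).mean(axis=1)  # PD(t)
    pf = (absent[None, :] > t[:, None]).mean(axis=1)   # PF(t)
    return t, pd, pf

rng = np.random.default_rng(3)
sig = rng.normal(2.0, 1.0, 500)   # hypothetical concentrations, agent present
noi = rng.normal(0.0, 1.0, 500)   # hypothetical concentrations, agent absent
t, pd, pf = roc_3d(sig, noi)
# 2D projections: (PF, PD) is the conventional ROC; (t, PD) and (t, PF)
# show how the concentration threshold trades detection against false alarms.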
There is an immediate need for the ability to detect, identify, and quantify chemical and biological agents in water supplies during water point selection, production, storage, and distribution to consumers. Through the U.S. Army sponsored Joint Service Agent Water Monitor (JSAWM) program, based on hand-held assays that exist in a ticket format, we are developing new algorithms for automatic processing of tickets. In previous work, detection of control dots on the tickets was carried out with traditional image segmentation approaches such as Otsu's method and other entropy-based thresholding techniques. In experiments, these approaches were found to be sensitive to illumination effects in the camera reader. As a result, more robust, object-oriented approaches to detecting the control dots are required. Mathematical morphology is a powerful image analysis technique that focuses on the size and shape of the objects in a scene. In this work, we describe a novel application of morphological operations to the identification of control dots in hand-held assay ticket imagery. The images were pre-processed by a light compensation algorithm prior to morphological analysis. The performance of the proposed approach is evaluated using receiver operating characteristic (ROC) analysis.
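The sketch below contrasts the two stages described above: a global Otsu threshold (the illumination-sensitive step) applied after a simple light-compensation pass, followed by morphological opening with a disk footprint sized to the control dots so that only dot-shaped objects survive. The smoothing-based illumination model and the dot radius are illustrative assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, disk

def detect_control_dots(ticket_gray, dot_radius=5):
    """ticket_gray: 2-D float array. Returns a binary mask of dot candidates."""
    # Light compensation: divide out a heavily smoothed illumination field.
    illum = gaussian_filter(ticket_gray, sigma=25) + 1e-6
    flat = ticket_gray / illum
    # Global Otsu threshold (the step that is illumination-sensitive
    # without the compensation above); dots are darker than background.
    dark = flat < threshold_otsu(flat)
    # Morphological opening with a dot-sized disk keeps only objects
    # that are at least as large and round as a control dot.
    return binary_opening(dark, disk(dot_radius))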