Evaluating the effectiveness of fusion systems in a multi-sensor, multi-platform environment has historically been a
tedious and time-consuming process. Typically it has been necessary to perform data collection and analysis in different
code baselines, which requires error-prone data format conversions and manual spatial-temporal data registration. The
Metrics Assessment System (MAS) has been developed to provide automated, real-time metrics calculation and display.
MAS presents metrics in tables, graphs, and overlays within a tactical display. Comparative assessments are based on
truth tracks, including position, velocity, and classification information. The system provides tabular history drill-down
for each metric and each track. MAS, which is currently being evaluated on anti-submarine warfare scenarios, can be a
valuable tool both in objective evaluation performance of tracking and fusion algorithms and in identifying asset and
target interactions that cause the fused tracks to generate from the true ones.
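The abstract does not spell out how MAS computes its metrics, but the core comparison against truth tracks can be sketched. The minimal Python example below, with a hypothetical track_rmse helper and made-up track data, time-aligns a fused track to a truth track by linear interpolation (a crude stand-in for the spatial-temporal registration MAS automates) and reports the root-mean-square position error.

```python
import numpy as np

def track_rmse(truth_times, truth_pos, fused_times, fused_pos):
    """Root-mean-square position error of a fused track against truth.

    Interpolates the truth track to the fused report times, then measures
    the Euclidean position error at each report. truth_pos and fused_pos
    are (N, 2) arrays of x/y positions; times must be increasing.
    """
    errors = []
    for axis in range(truth_pos.shape[1]):
        interp = np.interp(fused_times, truth_times, truth_pos[:, axis])
        errors.append(fused_pos[:, axis] - interp)
    return float(np.sqrt(np.mean(np.sum(np.square(errors), axis=0))))

# Hypothetical example: a fused track reported at different times than truth.
truth_t = np.array([0.0, 10.0, 20.0, 30.0])
truth_xy = np.array([[0.0, 0.0], [100.0, 0.0], [200.0, 0.0], [300.0, 0.0]])
fused_t = np.array([5.0, 15.0, 25.0])
fused_xy = np.array([[52.0, 3.0], [148.0, -2.0], [251.0, 4.0]])
print(f"position RMSE: {track_rmse(truth_t, truth_xy, fused_t, fused_xy):.2f}")
```

A full MAS-style assessment would apply the same pattern to velocity and classification agreement per track, with drill-down into the per-report history.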
Faculty from the University of Tennessee at Chattanooga and the University of Tennessee College of Medicine, Chattanooga Unit, have used data mining techniques and neural networks to examine a set of fourteen features, data items, and HUMINT assessments for 2,148 emergency room patients with symptoms possibly indicative of Acute Coronary Syndrome (ACS). Specifically, the authors have generated Bayesian networks describing linkages and causality in the data and have compared them with neural networks. The data include objective information routinely collected during triage and the physician's initial case assessment, a HUMINT appraisal. Both the neural network and the Bayesian network were used to fuse the disparate types of information with the goal of forecasting thirty-day adverse patient outcome. This paper presents details of the methods of data fusion, including both the data mining techniques and the neural network. Results are compared using Receiver Operating Characteristic (ROC) curves describing the outcomes of both methods, both using only objective features and including the subjective physician's assessment. While preliminary, the results of this continuing study are significant both from the perspective of the potential use of intelligent fusion of biomedical informatics to aid the physician in prescribing treatment necessary to prevent serious adverse outcomes from ACS and as a model of fusing objective data with a subjective HUMINT assessment. Possible future work includes extension of successfully demonstrated intelligent fusion methods to other medical applications, and the use of decision-level fusion to combine results from the data mining and neural network approaches for even more accurate outcome prediction.
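The study's models and patient data are not reproduced here; the sketch below only illustrates the comparison the abstract describes, using synthetic stand-in data, scikit-learn's MLPClassifier in place of the authors' neural network, and ROC AUC as the summary statistic. The objective/subjective feature split and all variable names are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the study data: 13 "objective" triage features
# plus one "subjective" physician-assessment feature that carries signal.
n = 2148
objective = rng.normal(size=(n, 13))
outcome = (objective[:, 0] + 0.5 * objective[:, 1] + rng.normal(size=n) > 1.0).astype(int)
subjective = outcome * 0.8 + rng.normal(scale=0.6, size=n)  # noisy expert appraisal

for label, X in [("objective only", objective),
                 ("objective + physician assessment",
                  np.column_stack([objective, subjective]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, outcome, test_size=0.3, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{label}: ROC AUC = {auc:.3f}")
```

On data like this, the fused feature set should score a higher AUC, mirroring the paper's comparison of objective-only versus objective-plus-HUMINT models.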
This paper describes the development and testing of a program that provides a quantitative metric for the comparison of night vision fusion algorithms. The user enters into the Metric Program the names of a thermal file, a vision file, and the corresponding fused image file. The program assigns a fusion rating to the algorithm based on the following four quantitative tests: information content (ic), vision retention (vr), thermal retention (tr), and a bar test to detect black segments. In ic, the information content of the fused image is compared with a weighted sum of the vision and thermal images. In vr, the number of faint lights that the fused image failed to incorporate is counted. In tr, the number of pixels from the thermal file included in the fused image is determined. With some fusion algorithms, if one of the sensors is blocked, a black segment appears in that area of the fused image, thus losing the information from the unblocked sensor. To test for this, the Metric Program creates a thermal file with three horizontal black bars. The program then allows the user to call the executable file of the algorithm under test. The user is then asked to examine the fused image; if three pitch-black horizontal bars appear on the image, the algorithm fails the test. While the bar test is invariant to the vision/thermal image pair used, the other tests are not. For this reason it is suggested that an algorithm be tested with five or six different image pairs and a mean fusion rating calculated. The program is used to evaluate several different algorithms. Day vision fusion algorithms are also tested.
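The abstract gives no formulas for the four tests, so the following is one plausible reading of the information-content (ic) test: compare the Shannon entropy of the fused image with a weighted sum of the source-image entropies. The entropy and ic_score helpers, the 0.5 weighting, and the max-fused placeholder images are all hypothetical.

```python
import numpy as np

def entropy(img):
    """Shannon entropy (bits/pixel) of an 8-bit grayscale image."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins to avoid log(0)
    return float(-np.sum(p * np.log2(p)))

def ic_score(vision, thermal, fused, w_vision=0.5):
    """Information-content test: entropy of the fused image relative to
    a weighted sum of the source entropies (near or above 1 is good)."""
    target = w_vision * entropy(vision) + (1 - w_vision) * entropy(thermal)
    return entropy(fused) / target

# Hypothetical images; pixel-wise max stands in for the algorithm under test.
rng = np.random.default_rng(1)
vision = rng.integers(0, 256, size=(64, 64))
thermal = rng.integers(0, 256, size=(64, 64))
fused = np.maximum(vision, thermal)
print(f"ic score: {ic_score(vision, thermal, fused):.3f}")
```

Because scores like this vary with scene content, the paper's suggestion of averaging a fusion rating over five or six image pairs applies to everything except the bar test.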
In both military and civilian applications, increasing interest is being shown in fusing IR and vision images for improved situational awareness. In previous work, the authors developed a fusion method for combining thermal and vision images into a single image emphasizing the most salient features of the surrounding environment. This approach is based on the assumption that although the thermal and vision data are uncorrelated, they are complementary and can be fused using a suitable disjunctive function. This paper, as a continuation of that work, describes the development of an information-based, real-time, data-level fusion method. In addition, the applicability of the algorithms developed for data-level fusion to feature-level techniques will be investigated.
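The paper does not identify which disjunctive function it uses; a common fuzzy disjunction is the probabilistic sum, sketched below on hypothetical co-registered 8-bit frames (pixel-wise max would be another standard choice).

```python
import numpy as np

def disjunctive_fuse(vision, thermal):
    """Pixel-wise disjunctive fusion of co-registered vision and thermal
    frames, normalized to [0, 1]: a pixel is salient in the output if it
    is salient in either sensor."""
    v = vision.astype(np.float32) / 255.0
    t = thermal.astype(np.float32) / 255.0
    fused = v + t - v * t  # probabilistic sum, one common fuzzy disjunction
    return (fused * 255.0).astype(np.uint8)

# Hypothetical 8-bit frames standing in for co-registered sensor output.
rng = np.random.default_rng(2)
vision = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
thermal = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
print(disjunctive_fuse(vision, thermal).shape)
```

A per-pixel operation like this is cheap enough for the real-time, data-level fusion the paper targets.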
Intelligent processing techniques are applied to a ballistic missile defense (BMD) application, focused on classifying the objects in a typical threat complex using fused IR and ladar sensors. These techniques indicate the potential to improve designation robustness against 'off-normal' or unexpected conditions, or when sensor data or classifier performance degrades. A fused sensor discrimination (FuSeD) simulation testbed was assembled for designation experiments: to evaluate test and simulation data, assess intelligent processor and classification algorithms, and evaluate sensor performance. Results were produced for a variety of neural net and other nonlinear classifiers, yielding high designation performance and low false alarm rates. Most classifiers yield false alarm rates of a few percent; rates are further improved when multiple techniques are combined via a majority-based fusion technique. Example signatures, features, classifier descriptions, intelligent controller design, and architecture are included. Work was performed for the Discriminating Interceptor Technology Program (DITP).
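As an illustration of the majority-based fusion step, the sketch below combines the binary designation calls of several classifiers by simple majority vote; the decision matrix and classifier labels are invented for the example.

```python
import numpy as np

def majority_fuse(decisions):
    """Decision-level fusion by majority vote.

    decisions: (n_classifiers, n_objects) array of 0/1 calls, where 1
    means "designate as threat". An object is designated only when more
    than half of the classifiers agree, which suppresses the occasional
    false alarm of any single classifier.
    """
    votes = np.sum(decisions, axis=0)
    return (votes > decisions.shape[0] / 2).astype(int)

# Hypothetical calls from three classifiers on five objects.
decisions = np.array([
    [1, 0, 1, 0, 1],   # neural net
    [1, 0, 0, 0, 1],   # nonlinear classifier A
    [1, 1, 1, 0, 1],   # nonlinear classifier B
])
print(majority_fuse(decisions))  # -> [1 0 1 0 1]
```

Objects 2 and 3 show the effect: a lone dissenting or false-alarming classifier is outvoted, which is the mechanism behind the improved false alarm rates the abstract reports.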
This paper describes a project sponsored by the U.S. Army Space and Strategic Defense Command (USASSDC) to develop, test, and demonstrate sensor fusion algorithms for target recognition. The purpose of the project was to exploit sensor fusion at all levels (signal, feature, and decision) and in all combinations to improve target recognition capability against tactical ballistic missile (TBM) targets. These algorithms were trained with simulated radar signatures to accurately recognize selected TBM targets. The simulated signatures represent measurements made by two radars (S-band and X-band) with the targets at a variety of aspect and roll angles. Two tests were conducted: one with simulated signatures collected at angles different from those in the training database, and one using actual test data. The test results demonstrate a high degree of recognition accuracy. This paper describes the training and testing techniques used, shows the fusion strategy employed, and illustrates the advantages of exploiting multi-level fusion.
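Of the three fusion levels the project exploited, feature-level fusion is the simplest to sketch: concatenate the feature vectors extracted from the two radars and train a single classifier on the result. The synthetic features, RandomForestClassifier stand-in, and accuracy comparison below are illustrative assumptions, not the paper's actual algorithms or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Synthetic stand-ins for per-target feature vectors extracted from the
# S-band and X-band radar signatures (real features would come from the
# simulated signature database described in the paper).
n, classes = 600, 3
labels = rng.integers(0, classes, size=n)
s_band = rng.normal(size=(n, 8)) + labels[:, None] * 0.4
x_band = rng.normal(size=(n, 8)) + labels[:, None] * 0.4

# Feature-level fusion: concatenate both radars' features into one vector.
fused = np.hstack([s_band, x_band])

for name, X in [("S-band only", s_band), ("X-band only", x_band), ("fused", fused)]:
    acc = cross_val_score(RandomForestClassifier(random_state=0), X, labels, cv=5).mean()
    print(f"{name}: accuracy = {acc:.3f}")
```

Signal-level fusion would instead combine the raw signatures before feature extraction, and decision-level fusion would combine per-radar classifier outputs, as in the majority-vote sketch above.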