Various fusion system architectures postulated and studied previously for environments with two and three data sources are explored further in this study to bring out the expanding scope for delineating architecture options in multiple-data-source environments. A spectrum of single- and multi-stage fusion architecture options is defined. The potential for such an expansion of choices is illustrated using a scenario with four data sources as an example. Potential problem environments corresponding to this range of two to four data sources are identified. Various fusion logic strategies that can be brought to bear on the analysis of these architecture options, when the architectures are employed for Decisions In - Decisions Out (DEI-DEO) fusion, are also discussed.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Applications of information fusion include cases that involve a large number of information sources. Methods developed in the context of a few information sources may not, and often do not, scale well to cases involving a large number of sources. This paper specifically addresses the problem of fusing a large number of information sources. The performance of a Support Vector Machine (SVM) based approach is investigated in input spaces consisting of thousands of information sources. Microarray pattern recognition, an important bioinformatics task with significant medical diagnostics applications, is considered from the information and sensor data fusion viewpoint, and recognition performance experiments conducted on microarray data are discussed. An approach involving high-dimensional input-space partitioning is presented and its efficacy is investigated. Aspects of feature-level and decision-level fusion are discussed as well. The results indicate the feasibility of SVM-based information fusion with a large number of information sources.
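The partition-and-fuse idea can be sketched as follows. This is an illustrative stand-in only: the synthetic data, partition sizes, and majority-vote fusion rule below are invented, not the authors' experimental setup.

```python
# Sketch: train one SVM per partition of a high-dimensional input space,
# then fuse the partition-level decisions by majority vote.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_features, n_parts = 200, 1000, 10

# Synthetic two-class data: the class mean is shifted slightly in every feature.
y = rng.integers(0, 2, n_samples)
X = rng.normal(size=(n_samples, n_features)) + 0.15 * y[:, None]

train, test = slice(0, 150), slice(150, None)

# Partition the input space and train one SVM per partition.
blocks = np.array_split(np.arange(n_features), n_parts)
clfs = [SVC(kernel="linear").fit(X[train][:, b], y[train]) for b in blocks]

# Decision-level fusion: majority vote over the partition-level decisions.
votes = np.stack([c.predict(X[test][:, b]) for c, b in zip(clfs, blocks)])
fused = (votes.mean(axis=0) > 0.5).astype(int)
accuracy = (fused == y[test]).mean()
print(f"fused accuracy: {accuracy:.2f}")
```

Each partition-level SVM is weak on its own; the vote aggregates their decisions, which is one way such an approach can remain tractable with thousands of input dimensions.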
The Combat Air Identification Fusion Algorithm (CAIFA), developed by Daniel H. Wagner, Associates, is a prototype, inferential reasoning algorithm for air combat identification. Bayesian reasoning and updating techniques are used in CAIFA to fuse multi-source identification evidence to provide identity estimates (allegiance, nationality, platform type, and intent) of detected airborne objects in the air battle space, enabling positive and rapid Combat Identification (CID) decisions. CAIFA was developed for the Composite Combat Identification (CCID) project under the Office of Naval Research (ONR) Missile Defense (MD) Future Naval Capability (FNC) program. CAIFA processes identification (ID) attribute evidence generated by surveillance sensors and other information sources over time by updating the identity estimate for each target using Bayesian inference. CAIFA exploits the conditional interdependencies of attribute variables by constructing a context-dependent Bayesian Network (BN). This formulation offers a well-established, consistent approach for evidential reasoning, renders manageable the potentially large CID state space, and provides a flexible and extensible representation to accommodate requirements for model reconfiguration/restructuring. CAIFA enables reasoning across and at different levels of the Air Space Taxonomy.
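The Bayesian updating step can be illustrated on a toy identity state. The platform types and likelihood numbers below are invented, and this is not CAIFA's network, only the underlying update rule applied report by report.

```python
# Sketch: sequential Bayesian updating of a discrete identity state
# from multi-source attribute evidence (all numbers invented).
import numpy as np

platforms = ["fighter", "bomber", "airliner"]
prior = np.array([1 / 3, 1 / 3, 1 / 3])  # uniform prior over platform type

# Each report gives P(observed attribute | platform type), one entry per type.
reports = [
    np.array([0.8, 0.5, 0.1]),   # high-speed kinematics: unlikely for airliner
    np.array([0.7, 0.2, 0.05]),  # small radar cross-section
]

posterior = prior
for likelihood in reports:
    posterior = posterior * likelihood       # Bayes' rule, unnormalized
    posterior = posterior / posterior.sum()  # renormalize over the ID state

print(dict(zip(platforms, posterior.round(3))))
```

Each new piece of attribute evidence multiplies into the running posterior, so the identity estimate sharpens as reports accumulate.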
During design of classifier fusion tools, it is important to evaluate the performance of the fuser. In many cases, the output of the classifiers needs to be simulated to provide the range of fusion input that allows an evaluation throughout the design space. One fundamental question is how the output should be distributed, in particular for multi-class continuous-output classifiers. Using the wrong distribution may lead to fusion tools that are either overly optimistic or that otherwise distort the outcome. Either case may lead to a fuser that performs sub-optimally in practice. It is therefore imperative to establish the bounds of different classifier output distributions. In addition, one must take into account the design space, which may be of considerable complexity. Exhaustively simulating the entire design space may be a lengthy undertaking; therefore, the simulation has to be guided to populate the relevant areas of the design space. Finally, it is crucial to quantify the performance throughout the design of the fuser. This paper addresses these issues by introducing a simulator that allows the evaluation of different classifier distributions in combination with a design-of-experiment setup and a built-in performance evaluation. We show results from an application of diagnostic decision fusion on aircraft engines.
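One common way to simulate multi-class continuous classifier outputs is to draw them from a Dirichlet distribution whose concentration controls the apparent confidence. The sketch below (all parameters invented, not the paper's simulator) shows how the choice of distribution alone changes how optimistic the simulated classifier looks.

```python
# Sketch: Dirichlet-distributed soft outputs for a simulated multi-class
# classifier; the concentration parameters control simulated confidence.
import numpy as np

rng = np.random.default_rng(1)
n_classes, n_samples = 3, 5000

def simulate_outputs(true_class, strength, accuracy_bias):
    """Soft outputs summing to 1; larger accuracy_bias favors the true class."""
    alpha = np.full(n_classes, strength)
    alpha[true_class] += accuracy_bias
    return rng.dirichlet(alpha, size=n_samples)

confident = simulate_outputs(true_class=0, strength=0.5, accuracy_bias=4.0)
diffuse = simulate_outputs(true_class=0, strength=5.0, accuracy_bias=4.0)

# A fuser evaluated only on the 'confident' outputs would look overly optimistic.
acc_confident = (confident.argmax(axis=1) == 0).mean()
acc_diffuse = (diffuse.argmax(axis=1) == 0).mean()
print(acc_confident, acc_diffuse)
```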
In this paper we propose a new approach for distributed multiclass classification using a hierarchical fusion architecture. Binary decisions from local sensors, possibly in the presence of faults, are fused locally. Locally fused results are forwarded to the global fusion center that determines the final classification result. Classification fusion in our approach is implemented via error correcting codes to incorporate fault-tolerance capability. This new approach not only provides an improved fault-tolerance capability but also reduces bandwidth requirements as well as computation time and memory requirements at the fusion center. Numerical examples are provided to illustrate the performance of this new approach.
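The error-correcting-code mechanism can be sketched as follows. The code matrix is invented for illustration and is not the one used in the paper: each class is assigned a binary codeword over the local detectors, and the fusion center picks the class whose codeword is nearest in Hamming distance, so a few faulty local decisions are tolerated.

```python
# Sketch: minimum-Hamming-distance decoding of local binary decisions.
import numpy as np

# Rows: classes; columns: expected binary decision of each local detector.
code = np.array([
    [0, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
])

def fuse(local_decisions):
    """Pick the class whose codeword is closest in Hamming distance."""
    distances = (code != local_decisions).sum(axis=1)
    return int(distances.argmin())

received = np.array([0, 1, 1, 1, 1, 0, 0])  # matches class 1 exactly
faulty = np.array([0, 1, 0, 1, 1, 0, 0])    # one local detector flipped
print(fuse(received), fuse(faulty))
```

The minimum pairwise distance of this toy code is 4, so any single faulty local decision is corrected; the fusion center also only ever receives one bit per detector, which is the bandwidth saving mentioned above.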
An approach to fusing multiple images based on Dempster-Shafer evidential reasoning is proposed in this article. Dempster-Shafer theory provides a complete framework for combining weak evidence from multiple sources. Such situations typically arise in image fusion problems, where a `real scene' image has to be estimated from incomplete and unreliable observations. By converting images from the spatial domain into evidential representations, decisions are made to aggregate evidence such that a fused image is generated. The proposed fusion approach is evaluated on a broad set of images, and promising results are reported.
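The combination step at the heart of such an approach is Dempster's rule. Below is a minimal sketch with invented mass assignments over a two-element frame; conflicting mass is discarded and the remainder renormalized.

```python
# Sketch: Dempster's rule of combination for two sources over a small frame.
from itertools import product

frame = frozenset({"edge", "smooth"})

# Basic mass assignments; mass on the whole frame encodes a source's ignorance.
m1 = {frozenset({"edge"}): 0.6, frame: 0.4}
m2 = {frozenset({"edge"}): 0.3, frozenset({"smooth"}): 0.4, frame: 0.3}

def combine(m1, m2):
    fused, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to the empty set
    # Normalize by 1 - K, the total non-conflicting mass.
    return {s: w / (1.0 - conflict) for s, w in fused.items()}

m12 = combine(m1, m2)
print({tuple(sorted(s)): round(w, 3) for s, w in m12.items()})
```

In an image-fusion setting the frame and masses would be derived per pixel or region from the observations; the combination rule itself is unchanged.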
This paper describes integration methods to increase the level of automation in building reconstruction. Aerial imagery has long been a major source in mapping, and in recent years LIDAR data have become popular as another mapping resource. In terms of performance, aerial imagery can delineate object boundaries but leaves many parts of the boundaries missing during feature extraction, whereas LIDAR data provide direct information about the heights of object surfaces but are limited in boundary localization. Efficient methods that use the complementary characteristics of the two sensors are described to generate hypotheses of building boundaries and to localize the object features. Tree structures over grid contours of the LIDAR data are used for contour interpretation. Buildings are recognized by analyzing the contour trees and are modeled with surface patches from the LIDAR data. Hypotheses of building models are generated as combinations of wing models and verified by assessing the consistency between the corresponding data sets. Experiments using aerial imagery and laser data are presented. The results show that building boundaries are successfully recognized through the contour analysis and the inference from contours, and that the wing-model-based modeling method increases the level of automation in the hypothesis generation/verification steps.
This contribution presents a new fusion strategy to inspect specular surfaces. To cope with illumination problems, several images are recorded under different lighting. Typically, the information of interest is extracted from each image separately and is then combined at a decision level. In our approach, however, all images are processed simultaneously by means of centralized fusion, no matter whether the desired results are images, features, or symbols. Since the information fused is closer to the source, a better exploitation of the raw data is achieved. The sensors are virtual in the sense that a single camera is employed to record all images under different illumination patterns. The fusion problem is formulated by means of an energy function, whose minimization yields the desired fusion results, which describe surface defects. The performance of the proposed methodology is illustrated by means of two case studies: the analysis of machined surfaces, and the inspection of painted free-form surfaces. The programmable light sources utilized are a DMD and an LED-based illumination device, respectively. In both cases, the results demonstrate that by generating complementary imaging situations and using fusion techniques, a reliable yet cost-efficient inspection is attained that matches the needs of industry.
Signal-level image fusion has in recent years established itself as a useful tool for dealing with the vast amounts of image data obtained by disparate sensors. In many modern multisensor systems, fusion algorithms significantly reduce the amount of raw data that needs to be presented or processed without loss of information content, and provide an effective means of information integration. One of the most useful and widespread applications of signal-level image fusion is for display purposes. Fused images provide the observer with a more reliable and more complete representation of the scene than would be obtained through single-sensor display configurations. In recent years, a plethora of algorithms dealing with the problem of fusion for display has been proposed. However, almost all are based on relatively basic processing techniques and do not consider information from higher levels of abstraction. As some recent studies have shown, this does not always satisfy the complex demands of a human observer, and a more subjectively meaningful approach is required. This paper presents a fusion framework based on the idea that subjectively relevant fusion can be achieved if information at higher levels of abstraction, such as image edges and image segment boundaries, is used to guide the basic signal-level fusion process. Fusion of processed, higher-level information to form a blueprint for fusion at the signal level, and fusion of information from multiple levels of abstraction into a single fusion engine, are both considered. When tested on two conventional signal-level fusion methodologies, such multi-level fusion structures eliminated undesirable effects, such as fusion artifacts and loss of visually vital information, that compromise their usefulness. Images produced by including multi-level information in the fusion process are clearer and of generally better quality than those obtained through conventional low-level fusion. This is verified through subjective evaluation and established objective fusion performance metrics.
We use our proposed discrete multi-resolution wavelet transform and grey system theory prediction to fuse a sequence of images into a single high-quality image. The fused images are obtained simultaneously via only one wavelet transform over the image sequence. Several other methods were implemented for comparison with the proposed approach. In the fused image, the information from the individual images in the sequence complements one another, so the fusion result not only contains abundant information but also preserves the detail of the image sequence. The experimental results also show that image fusion is an important way to improve the representational ability of an image.
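A generic wavelet fusion scheme of this flavor can be sketched with a single-level Haar transform. This stand-in omits the authors' specific transform and the grey-system prediction: approximation coefficients are averaged and detail coefficients are chosen per position by maximum absolute value.

```python
# Sketch: single-level 2-D Haar fusion (average approximation, max-abs details).
import numpy as np

def haar2(x):
    """One-level 2-D Haar transform: approximation + three detail bands."""
    a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4
    h = (x[0::2, 0::2] - x[0::2, 1::2] + x[1::2, 0::2] - x[1::2, 1::2]) / 4
    v = (x[0::2, 0::2] + x[0::2, 1::2] - x[1::2, 0::2] - x[1::2, 1::2]) / 4
    d = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 4
    return a, h, v, d

def ihaar2(a, h, v, d):
    """Exact inverse of haar2."""
    x = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    x[0::2, 0::2] = a + h + v + d
    x[0::2, 1::2] = a - h + v - d
    x[1::2, 0::2] = a + h - v - d
    x[1::2, 1::2] = a - h - v + d
    return x

def fuse(img1, img2):
    (a1, *d1), (a2, *d2) = haar2(img1), haar2(img2)
    approx = (a1 + a2) / 2
    details = [np.where(np.abs(c1) >= np.abs(c2), c1, c2)
               for c1, c2 in zip(d1, d2)]
    return ihaar2(approx, *details)

rng = np.random.default_rng(2)
base = rng.normal(size=(8, 8))
img1, img2 = base.copy(), base.copy()
img1[:, 4:] = 0.0  # right half missing in image 1
img2[:, :4] = 0.0  # left half missing in image 2
fused = fuse(img1, img2)
```

With identical inputs the scheme reconstructs the input exactly; with complementary inputs the fused result recovers detail that neither input carries alone.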
This paper explores the concepts of multisensor data fusion based on the Dempster-Shafer (DS) evidential theory in order to achieve a mine versus false-alarm (FA) classification of landmine targets. Initially, a decision-level DS algorithm is proposed to combine the evidence from multiple sensors of the landmine detection system developed by the General Dynamics Canada Limited (GD Canada). Subsequently, a feature-level DS fusion algorithm is employed to operate on a set of features reported by the ground penetrating radar (GPR) sensor of the system. The data used in the present study was acquired from the Aberdeen Proving Grounds (APG) test site in USA as part of the Ground Standoff Mine Detection System (GSTAMIDS) trials.
The proposed decision-level DS algorithm yielded a probability of detection (Pd) of 92.53% at a false-alarm rate (FAR) of 0.0697 FAs/m2. The Pd and FAR performance results achieved using the decision-level DS algorithm are comparable with the results obtained using three other decision-level fusion algorithms previously developed by GD Canada based on heuristic, Bayesian inference, and voting fusion concepts. On the other hand, the feature-level DS fusion, when tested with the information presented by the GPR sensor only, resulted in a higher Pd value of 78.54%, compared to the corresponding result of 61.43% obtained using the heuristic algorithm. The GPR sensor is one of the three scanning sensors present on the system.
In several practical applications of data fusion, and more precisely in object identification problems, we need to combine imperfect information coming from different sources (sensors, humans, etc.), the resulting uncertainty being naturally of different kinds. In particular, one piece of information may naturally be expressed by a membership function while another may best be represented by a belief function. Usually, information modeled in the fuzzy sets formalism (by a membership function) concerns attributes like speed, length, or radar cross-section, whose domains of definition are continuous. However, the object identification problem refers to a discrete and finite framework (the number of objects in the database is finite and known). This thus implies a natural but unavoidable change of domain. To respect the intrinsic characteristics of the uncertainty arising from the different sources and fuse it in order to identify an object among a list of possible ones in the database, we need (1) to use a unified framework in which both fuzzy sets and belief functions can be expressed, and (2) to respect the natural discretization of the membership function through the change of domain (from the attribute domain to the frame of discernment). In this paper, we propose to represent both fuzzy sets and belief functions by random sets. While the link between belief functions and random sets is direct, transforming fuzzy sets into random sets involves the use of α-cuts for the construction of the focal elements. This transformation usually generates a large number of focal elements, often unmanageable in a fusion process. We propose a way to reduce the number of focal elements based on parameters like the desired number of focal elements, the acceptable distance from the approximated random set to the original discrete one, or the acceptable loss of information.
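The α-cut construction can be sketched as follows, with invented membership values: each distinct membership level yields a focal element (the α-cut), and the difference between consecutive levels becomes its mass, producing a consonant (nested) random set.

```python
# Sketch: converting a discrete fuzzy membership function over candidate
# objects into a consonant random set via its alpha-cuts.
mu = {"obj_a": 1.0, "obj_b": 0.7, "obj_c": 0.4}  # membership over the frame

# Distinct membership levels, descending; consecutive gaps become masses.
levels = sorted(set(mu.values()), reverse=True)   # [1.0, 0.7, 0.4]
levels_below = levels[1:] + [0.0]

focal = {}
for alpha, below in zip(levels, levels_below):
    cut = frozenset(o for o, m in mu.items() if m >= alpha)  # the alpha-cut
    focal[cut] = alpha - below                               # its mass

for s, w in focal.items():
    print(sorted(s), round(w, 2))
```

Here three membership levels give three nested focal elements; with a finely quantized membership function the number of focal elements grows with the number of distinct levels, which is exactly why the paper's reduction step is needed.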
Decision-making systems make use of heterogeneous information to identify an object class or a target, and this information is affected by various kinds of imperfection. First, information issued from measurements (radar measurements, images) of an observation is represented by X variables; generally, each class can be described on these X variables through a probability distribution function. These decision systems also integrate expert a priori knowledge to assist the decision. Such information is defined by Y variables and is represented by fuzzy membership functions. The question is how to combine these two kinds of data appropriately in order to improve the decision process.

In this paper, we present a decision model combining probabilistic and fuzzy data. The decision is defined using a fuzzy Bayesian approach, which takes both kinds of imperfection into account. Only two classes are considered, using one X variable and one Y variable; an extension to more complicated cases is then proposed.

To validate the interest of this approach, we compare it with Bayesian classification and fuzzy classification applied separately to synthetic data. In addition, we show how our approach can be applied to the problem of radar system ranking, in which system resources are limited and, as a consequence, decisions about priorities must be made. Using the system information sources (probabilistic: radar measurements; fuzzy: prior expert knowledge; evidential), a comparison between Bayesian classification, fuzzy classification, the system decision, and the proposed approach is presented.
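A minimal sketch of such a combined decision rule, with all distributions and memberships invented: the Gaussian class likelihood on the measured variable X is weighted by the fuzzy membership of the expert variable Y before the normalized decision is taken.

```python
# Sketch: two-class decision combining a probabilistic likelihood on X
# with a fuzzy membership on Y (triangular memberships, invented numbers).
import math

def gauss(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

classes = {
    #      prior, p(x|class) params, fuzzy membership of the expert variable y
    "c1": (0.5, (0.0, 1.0), lambda y: max(0.0, 1.0 - abs(y - 2.0) / 2.0)),
    "c2": (0.5, (3.0, 1.0), lambda y: max(0.0, 1.0 - abs(y - 5.0) / 2.0)),
}

def decide(x, y):
    scores = {}
    for name, (prior, (mean, std), member) in classes.items():
        # probabilistic evidence * fuzzy evidence * prior (unnormalized)
        scores[name] = prior * gauss(x, mean, std) * member(y)
    total = sum(scores.values())
    return {k: v / total for k, v in scores.items()}

print(decide(x=1.4, y=3.2))
```

With these numbers the X measurement only weakly favors c1, and the fuzzy evidence on Y strengthens that preference; a purely Bayesian or purely fuzzy classifier would each use only half of the available evidence.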
Modern technology provides a great amount of information. In computer monitoring or control systems, especially real-time expert systems, one or two parameters are needed to express the quality and/or security of the whole system in order to keep the situation in hand. This paper presents a principle for synthesizing measurements of multiple system parameters into a single parameter, and its application to fuzzy pattern recognition.
The emphasis of this paper is the design of a performance evaluation methodology for data fusion-based multiple target tracking systems. Within this methodology, the performance evaluation process is treated as a whole new fusion process. This has two major advantages: (1) it facilitates reusability of existing models and algorithms, and (2) using standard frameworks and norms makes it easier for the tracking community to adopt it, thus giving this aspect of tracking a much-needed jumpstart. A case study implementation of this design methodology is presented. Three different track-truth association strategies were implemented to study the effect of the association strategy on the performance metrics. The case study results conclusively show that the track-truth association strategy should be selected with reference to the scenario characteristics, the “mission” goals, and the performance metrics to be evaluated.
Mutually aided target tracking and target identification schemes are described that exploit the couplings between the target tracking and target identification systems, which are typically implemented separately. A hybrid state-space approach is formulated to deal with continuous-valued kinematics, discrete-valued target type, and discrete-valued target pose (inherently continuous but quantized). We identify and analyze ten possible mutual-aiding mechanisms of differing complexity at different levels. The coupled tracker design is illustrated within the context of JointSTARS using GMTI and HRRR measurements as well as digital terrain and elevation data (DTED) and road maps, among others. The resulting coupled tracking and identification system is expected to outperform separately designed systems, particularly during target maneuvers, when recovering from temporary data dropouts, and in dense target environments.
The purpose of a tracking algorithm is to associate data measured by one or more (moving) sensors with moving objects in the environment. The object state that can be estimated with the tracking process depends on the type of data provided by these sensors. It is discussed how the tracking algorithm can adapt itself, depending on the provided data, to improve data association. The core of the tracking algorithm is an extended Kalman filter using multiple hypotheses for contact-to-track association. Examples of various sensor suites of radars, electro-optic sensors, and acoustic sensors are presented.
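The contact-to-track association at the core of such a tracker typically begins with a chi-square gate on the Mahalanobis distance of each contact to a track's predicted measurement. The small sketch below uses invented numbers and is not the paper's full multiple-hypothesis logic, only the gating test that feeds it.

```python
# Sketch: chi-square gating of contacts against one track's predicted
# measurement, the first step of contact-to-track association.
import numpy as np

def mahalanobis_sq(z, z_pred, S):
    """Squared Mahalanobis distance of contact z to the predicted measurement."""
    innov = z - z_pred
    return float(innov @ np.linalg.solve(S, innov))

GATE = 9.21  # chi-square 99% gate for a 2-D measurement

z_pred = np.array([10.0, 5.0])          # predicted measurement of one track
S = np.array([[2.0, 0.3], [0.3, 1.0]])  # innovation covariance

contacts = [np.array([10.5, 5.4]), np.array([16.0, 9.0])]
in_gate = [mahalanobis_sq(z, z_pred, S) <= GATE for z in contacts]
print(in_gate)
```

Contacts that fall inside the gate spawn association hypotheses; those outside are not considered for this track, which keeps the hypothesis tree manageable.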
Data fusion is a new kind of data-processing technique with broad application prospects. The appearance of stealthy and jamming targets on the modern battlefield calls for extracting and fusing data from networked sensors to maximize the use of information for target recognition, localization, and tracking. As is well known, a radar acquires only angle information, without range information, when it encounters a noise-suppression jammer. Netted radars can obtain the position of the interference source by making use of the radars' relative positions and the target angle information. Yet if there is more than one target, target localization can be erroneous due to association confusion. This paper presents methods that use Kalman-filter-based association to avoid such confusion.
In a distributed estimation or tracking system, the global estimate is generated from the local estimates. Under the assumption of independent cross-sensor noises, Bar-Shalom proposed a two-sensor track-fusion formula that uses only the inverses of the covariances of the single-sensor estimation errors. In this paper, we present a more general multisensor estimation fusion formula, in which neither the assumption of independent cross-sensor noises nor the direct computation of the generalized inverse of the joint covariance of the multiple-sensor estimation errors is necessary. Instead, as the number of sensors increases, a recursive algorithm is presented in which only the inverses, or generalized inverses, of matrices of the same dimension as the covariance matrices of the single-sensor estimation errors are required.
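The two-sensor special case referred to above can be written down directly. With invented numbers, the fused estimate weights each local estimate by the inverse of its error covariance:

```python
# Sketch: two-sensor fusion by inverse-covariance weighting
# (the independent-cross-sensor-noise special case).
import numpy as np

x1 = np.array([1.0, 0.5]); P1 = np.diag([0.4, 0.2])  # sensor 1 estimate
x2 = np.array([1.2, 0.3]); P2 = np.diag([0.1, 0.8])  # sensor 2 estimate

P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
P = np.linalg.inv(P1i + P2i)      # fused error covariance
x = P @ (P1i @ x1 + P2i @ x2)     # fused estimate
print(x, np.diag(P))
```

Each fused component leans toward the sensor that is more certain about it, and the fused variance is never worse than the better of the two local variances; the paper's contribution is a recursion that generalizes this beyond the independence assumption.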
An increasing number and variety of platforms are now capable of collecting remote sensing data over a particular scene. For many applications, the information available from any individual sensor may be incomplete, inconsistent or imprecise. However, other sources may provide complementary and/or additional data. Thus, for an application such as image feature extraction or classification, fusing the multiple data sources can lead to more consistent and reliable results.

Unfortunately, with the increased complexity of the fused data, the search space of feature-extraction or classification algorithms also greatly increases. With a single data source, the determination of a suitable algorithm may be a significant challenge for an image analyst. With fused data, the search for suitable algorithms can go far beyond the capabilities of a human in a realistic time frame, and becomes the realm of machine learning, where the computational power of modern computers can be harnessed to the task at hand.

We describe experiments in which we investigate the ability of a suite of automated feature extraction tools developed at Los Alamos National Laboratory to make use of multiple data sources for various feature extraction tasks. We compare and contrast this software's capabilities on 1) individual data sets from different data sources, 2) fused data sets from multiple data sources, and 3) fusion of results from multiple individual data sources.
This paper presents a novel approach to: 1) distinguish military vehicle groups, and 2) identify the names of military vehicle convoys in the level-2 fusion process. The data are generated from a generic Ground Moving Target Indication (GMTI) simulator that utilizes Matlab and Microsoft Access. These data are processed to identify the convoys and the number of vehicles in each convoy, using the minimum timed distance variance (MTDV) measurement. Once the vehicle groups are formed, convoy association is done using hypothesis techniques based upon the Neyman-Pearson (NP) criterion. One characteristic of NP is its low error probability when a priori information is unknown, and the NP approach was demonstrated with this advantage over a Bayesian technique.
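The Neyman-Pearson test underlying such hypothesis-based association can be sketched for two invented Gaussian hypotheses: the likelihood ratio is thresholded at a level set by the allowed false-alarm rate, and no prior on the hypotheses is required.

```python
# Sketch: Neyman-Pearson likelihood-ratio test for two Gaussian hypotheses,
# H0 ~ N(0,1) vs H1 ~ N(2,1), at a 5% false-alarm rate.
import math

def likelihood_ratio(x, mu0=0.0, mu1=2.0, sigma=1.0):
    """p(x | H1) / p(x | H0) for two equal-variance Gaussians."""
    return math.exp((x * (mu1 - mu0) - 0.5 * (mu1**2 - mu0**2)) / sigma**2)

# For these Gaussians the LR is monotone in x, so LR > eta  <=>  x > 1.645,
# and P(x > 1.645 | H0) = 5% fixes the false-alarm rate.
eta = likelihood_ratio(1.645)

def decide(x):
    return likelihood_ratio(x) > eta  # True: accept H1

print(decide(0.3), decide(2.1))
```

This is the sense in which NP needs no a priori information: the threshold comes from the tolerated false-alarm rate alone, unlike a Bayesian test, which would need priors and costs.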
This paper introduces a new algorithm called the “Adaptive Multimodal Biometric Fusion Algorithm” (AMBF), which is a combination of Bayesian decision fusion and particle swarm optimization. A Bayesian framework is implemented to fuse decisions received from multiple biometric sensors. The system’s accuracy improves for a subset of decision fusion rules. The optimal rule is a function of the error cost and the a priori probability of an intruder. This Bayesian framework formalizes the design of a system that can adaptively increase or reduce the security level. Particle swarm optimization searches the space of decisions and sensor operating points (i.e., thresholds) to achieve the desired security level; the optimization function aims to minimize the error in the Bayesian decision fusion. The particle swarm optimization algorithm yields the fusion rule and the sensor operating points at which the system should operate. This algorithm is important to systems designed with varying security needs and user access requirements. The adaptive algorithm is found to achieve the desired security level and to switch between different rules and sensor operating points as needs vary.
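As a rough illustration of Bayesian fusion of binary sensor decisions, the sketch below uses a Chair-Varshney-style log-likelihood-ratio rule; the per-sensor detection and false-alarm probabilities and the intruder prior are assumed inputs, and the particle swarm search over operating points is omitted.

```python
import math

def fuse_decisions(decisions, pd, pf, prior_intruder):
    """Bayesian fusion of binary sensor decisions (Chair-Varshney style).

    decisions:      list of 0/1 outputs from the biometric sensors
    pd, pf:         per-sensor detection / false-alarm probabilities
    prior_intruder: a priori probability of an intruder (assumed known)
    Returns 1 (intruder hypothesis) if the fused log-likelihood ratio
    exceeds the prior-derived threshold."""
    llr = sum(
        math.log(pd[i] / pf[i]) if u else math.log((1 - pd[i]) / (1 - pf[i]))
        for i, u in enumerate(decisions)
    )
    threshold = math.log((1 - prior_intruder) / prior_intruder)
    return 1 if llr > threshold else 0

# Three sensors, two of which flag an intruder.
print(fuse_decisions([1, 1, 0], pd=[0.9, 0.8, 0.7],
                     pf=[0.1, 0.2, 0.3], prior_intruder=0.5))
```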
A general method for time delay of arrival (TDOA) estimation for time-frequency information fusion is analyzed. This technique, of which the generalized cross correlation method and histogram methods are special cases, results in a low TDOA estimation error and high computational efficiency. The proposed method relies on a non-linear phase-error selector function, which rewards or punishes the phase error at each frequency. Three selector function candidates, namely cosine, rectangular, and triangular functions, are analyzed using simulations. In the presence of Gaussian noise, the rectangular selector function performs better than the cosine at signal-to-noise ratios (SNRs) above 10 dB, while for lower SNRs the cosine function performs better. With speech noise, the cosine function, which corresponds to the generalized cross correlation technique, has higher anomaly percentages and higher root-mean-square errors than the rectangular function. This suggests that, in general, the rectangular selector function, which can be computed more easily than the cosine selector function, is a superior technique to the generalized cross correlation method for real-time applications.
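The selector-function idea can be sketched with a simple search over integer candidate delays; the pi/8 tolerance for the rectangular selector below is an illustrative assumption, not the paper's parameter.

```python
import numpy as np

def tdoa_selector(x1, x2, fs, max_delay, selector):
    """Estimate TDOA (in samples) with a per-frequency phase-error selector.

    For each candidate delay, the residual phase error at every frequency
    is scored by a selector that rewards small errors and punishes large
    ones. 'cos' gives a generalized-cross-correlation-like score; 'rect'
    counts errors inside a tolerance window (pi/8 assumed here)."""
    n = len(x1)
    X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    dphi = np.angle(X2) - np.angle(X1)
    candidates = np.arange(-max_delay, max_delay + 1)
    scores = []
    for d in candidates:
        # Residual phase error if the true delay were d samples.
        err = np.angle(np.exp(1j * (dphi + 2 * np.pi * f * d / fs)))
        if selector == "cos":
            scores.append(np.cos(err).sum())
        else:  # rectangular selector
            scores.append(np.sum(np.abs(err) < np.pi / 8))
    return int(candidates[int(np.argmax(scores))])

# x2 is x1 delayed by 5 samples; both selectors recover the delay.
fs = 1000
t = np.arange(256)
x1 = np.sin(2 * np.pi * 50 * t / fs) + np.sin(2 * np.pi * 120 * t / fs)
x2 = np.roll(x1, 5)
print(tdoa_selector(x1, x2, fs, 20, "cos"),
      tdoa_selector(x1, x2, fs, 20, "rect"))
```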
In this paper, we present a multisensor surveillance system that consists of an optical sensor and an infrared sensor. In this system, a background subtraction method based on the zero-order statistics is presented for the moving object segmentation. Additionally, we propose an iterative method for multisensor video registration based on the robust Hausdorff distance. An efficient face detection system is shown as an application that will have enhanced performance based on the registration and fusion of the information from the two sensors. Experimental results show the efficacy of the proposed system.
A technique for representing data obtained from sensors (video streams, imagery, sound, text, etc.) is presented. The technique is called Finite Inductive Sequences (FI) and is proposed as a means for eliminating data requiring storage where conventional mathematical models do not eliminate enough and statistical models eliminate too much. FI is a simple idea based upon a symbol push-out technique that allows the order (inductive base) of the model to be set to an a priori value for all derived rules. The rules are obtained from an exemplar data set by a technique called factoring, which results in a table of rules called a ruling. These rules can then be used in pattern recognition applications. These techniques are shown by example as well as in a more formal setting, and lastly these rules and rulings are likened to the structure both present and absent in the cerebellum.
In this paper, a new approach to logic-based knowledge fusion is proposed. It is based on the use of (a form of) semaphores to resolve conflicting information. It is shown that a traditional use of semaphores is not suitable in the case of an iterated fusion process; accordingly, an adequate technique is proposed that allows multiple fusion steps to be performed. Technical properties of this new technique are then investigated.
This paper is concerned with recursively estimating the internal state of a nonlinear dynamic system by processing noisy measurements and the known system input. In the case of continuous states, an exact analytic representation of the probability density characterizing the estimate is generally too complex for recursive estimation, or even impossible to obtain. Hence, it is replaced by a convenient type of approximate density characterized by a finite set of parameters. Of course, parameters are desired that systematically minimize a given measure of deviation between the (often unknown) exact density and its approximation, which in general leads to a complicated optimization problem. Here, a new framework for state estimation based on progressive processing is proposed. Rather than trying to solve the original problem, it is exactly converted into a corresponding system of explicit first-order ordinary differential equations. Solving this system over a finite "time" interval yields the desired optimal density parameters.
An interval estimation fusion method based on sensor interval estimates and their confidence degrees is developed. When sensor estimates are independent of each other, a combination rule to merge sensor estimates and their confidence degrees is proposed. Moreover, two popular optimization criteria are suggested: minimizing the interval length subject to a minimum allowable confidence degree, or maximizing the confidence degree subject to a maximum allowable interval length. In terms of the two criteria, an optimal interval estimation fusion can be obtained based on the combined intervals and their confidence degrees. The results on the combined interval outputs and their confidence degrees can then be extended to obtain a conditional combination rule and the corresponding optimal fault-tolerant interval estimation fusion in terms of the two criteria. It is easy to see that Marzullo’s fault-tolerant interval estimation fusion is a special case of our method. In some sense, our combination rule is similar to the combination rule in Dempster-Shafer evidence theory; however, the confidence degrees given in this paper are summable, whereas the corresponding mass functions in Dempster-Shafer evidence theory are not, so Dempster-Shafer’s combination rule is not applicable to interval estimation fusion.
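A toy version of interval fusion, under an assumed union-bound combination rule rather than the paper's exact rule, intersects the reported intervals and lower-bounds the combined confidence:

```python
def fused_interval(intervals, confidences):
    """Lower-bound fusion of sensor interval estimates (illustrative).

    Each sensor reports (low, high) with a confidence degree c_i that
    the truth lies inside its interval. By the union bound, the truth
    lies in the intersection of all intervals with confidence at least
    1 - sum(1 - c_i). This is a simple stand-in for the paper's
    combination rule, not a reproduction of it."""
    lo = max(i[0] for i in intervals)
    hi = min(i[1] for i in intervals)
    if lo > hi:
        return None  # conflicting intervals: no common region
    conf = max(0.0, 1.0 - sum(1.0 - c for c in confidences))
    return (lo, hi), conf

print(fused_interval([(0.0, 2.0), (1.0, 3.0)], [0.95, 0.9]))
```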
Image segmentation, a key component in many Automatic Target Recognition (ATR) systems, has received considerable attention in the research community in recent years. A variety of segmentation approaches exist, and attempts have been made to combine various approaches in order to find more robust solutions. In this paper, the authors describe an inference fusion architecture for combining individual segmentation concepts which results in improved performance over the individual algorithms. We consider segmentation algorithms with several disparate cost functions as experts with a narrowly defined set of goals. The information obtained from each expert is combined and weighted with available evidence using an agent based inference system, resulting in an adaptive, robust and highly flexible image segmentation. Results obtained by applying this approach will be presented.
The revised JDL fusion model Level 4, process refinement, covers a broad spectrum of actions such as sensor management and control. A limitation of Level 4 is the purpose of control: whether it is for user needs or for system operation. Level 5, User Refinement, is a modification to the revised JDL model that distinguishes between machine process refinement and user refinement. User refinement can be either human control actions or refinement of the user's cognitive model. In many cases, fusion research concentrates on the machine and does not take full advantage of the human, who is not only a qualified expert able to refine the fusion process but also the customer for whom the fusion system is designed. Without user refinement, sensor fusion is incomplete and inadequate, and the user may dismiss its worth. To capture user capabilities, we explore the concept of user refinement through decision and action based on situational leadership models. We develop a Fuse-Act Situational User Refinement (FASUR) model that details four refinement behaviors: Neglect, Consult, Rely, and Interact, and five refinement functions: Planning, Organizing, Coordinating, Directing, and Controlling. Process refinement varies for different systems and different user information needs. By designing a fusion system with a specific user in mind, via Level 5, a fusion architecture can meet the user's information needs for varying situations, extend user sensing capabilities for action, and increase human-machine interaction.
This paper describes a case study of relation derivation within the context of situation awareness. First we present a scenario in which inputs are supplied by a simulated Level 1 system. The inputs are events annotated with terms from an ontology for situation awareness. This ontology contains concepts used to represent and reason about situations. The ontology and the annotations of events are represented in DAML and Rule-ML and then systematically translated to a formal method language called MetaSlang. Having all information expressed in a formal method language allows us to use a theorem prover, SNARK, to prove that a given relationship among the Level 1 objects holds (or that it does not hold). The paper shows a proof of concept that relation derivation in situation awareness can be done within a formal framework. It also identifies bottlenecks associated with this approach, such as the issue of the large number of potential relations that may have to be considered by the theorem prover. The paper discusses ways of resolving this as well as other problems identified in this study.
This paper describes a series of experiments in data fusion of remotely sensed multispectral satellite imagery, in-situ physical measurement data (temperature, pH, salinity), and implicitly encoded knowledge (contained in location and season) to predict values and classified levels of chlorophyll-a using an artificial neural net (ANN). ANNs inherently fuse data inputs and discover relationships to provide a fused interpretation of the inputs. The experiments investigated the effects of fusing data and knowledge from the three different types of sources: non-contact, physical contact, and implicit. The results indicate that fusing the three source types improved prediction of chlorophyll-a values and classification levels, and that the multisource ANN fusion approach might improve or augment present periodic sample point monitoring methods for chlorophyll-a.
In the Intensive Care Unit (ICU), a patient’s physiological variables such as heart rate, blood pressure, temperature, ventilation, and brain activity are constantly monitored on-line, and medicine doses are given to ensure that these variables remain within desired limits. Such data are vital not only for on-line but also for off-line analyses and for medical staff training. Furthermore, in cases of deceased patients it is very important to retrieve these data in order to investigate the cause of death. This paper introduces a design for a Real-Time Data Acquisition System for monitoring ICU patients. The proposed design is implemented on a standard personal computer (PC) and operating system without using any sophisticated hardware or interface devices.
This paper analyzes the beamforming separation effectiveness of several different microphone array configurations. The configurations consist of 4, 8, and 16 microphones placed in a plane for two-dimensional speech separation, with inter-microphone distances varying from 10 cm to 160 cm in logarithmic step sizes. The linear and bi-linear arrays are found to yield the largest signal-to-noise ratio (SNR) gain after delay-and-sum beamforming for a speech signal in speech noise. Simulations also show that larger inter-microphone distances result in a higher SNR gain, although the practicality of large inter-microphone distances is limited for applications where the array size is constrained.
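Delay-and-sum beamforming itself is standard and can be sketched in a few lines; the gain measurement below uses a synthetic tone in white noise rather than speech in speech noise, purely for illustration.

```python
import numpy as np

def delay_and_sum(signals, delays):
    """Delay-and-sum beamformer: time-align each channel to the target
    direction, then average. Coherent signal adds in phase while
    spatially uncorrelated noise is averaged down by ~1/n_mics."""
    aligned = np.stack([np.roll(s, -d) for s, d in zip(signals, delays)])
    return aligned.mean(axis=0)

rng = np.random.default_rng(0)
n_mics, n, fs = 8, 4000, 8000
target = np.sin(2 * np.pi * 200 * np.arange(n) / fs)
delays = np.arange(n_mics) * 3          # plane wave: 3-sample inter-mic delay
mics = np.stack([np.roll(target, d) + 0.5 * rng.standard_normal(n)
                 for d in delays])
out = delay_and_sum(mics, delays)
gain = np.var(mics[0] - target) / np.var(out - target)
print(gain)  # noise power drops roughly n_mics-fold
```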
This paper explores the benefits of including time-boundary information in Hidden Markov Model (HMM) based speech recognition systems. Traditional systems feed the parameterized data directly into the HMM recognizer, which results in relatively complicated models and computationally expensive search steps. We propose several methods of detecting time boundaries prior to parameterization and present a novel way of including this additional information in the recognizer. The result is a significant simplification of the model prototypes, higher accuracy, and faster performance.
Over the last several years, the Naval Research Laboratory has developed video based systems for inspecting tanks (ballast, potable water, fuel, etc.) and other voids on ships. Over this past year, we have extensively utilized the Insertable Stalk Inspection System (ISIS) to perform inspections of shipboard tanks and voids. This system collects between 15 and 30 images of the tank or void being inspected as well as a video archive of the complete inspection process. A corrosion detection algorithm analyzes the collected imagery. The corrosion detection algorithm output is the percent coatings damage in the tank being inspected. The corrosion detection algorithm consists of four independent algorithms that each separately assesses the coatings damage in each of the images that are analyzed. The algorithm results are fused to attain a single coatings damage value for each of the analyzed images. The damage values for each of the images are next aggregated in order to develop a single coatings damage value for the tank being inspected.
This paper concentrates on the methods used to fuse the results from the four independent algorithms that assess corrosion damage at the individual image level as well as the methods used to aggregate the results from multiple images to attain a single coatings damage level. Results from both calibration tests and double blind testing are provided in the paper to demonstrate the advantages of the video inspection systems and the corrosion detection algorithm.
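The two-stage pipeline of image-level fusion followed by tank-level aggregation can be sketched generically; the median fusion and mean aggregation below are assumed, illustrative choices, not the methods actually used by the corrosion detection algorithm.

```python
import statistics

def fuse_image_scores(per_algorithm_scores):
    """Fuse damage estimates from the independent algorithms for one
    image. Median fusion (an assumed choice) is robust to a single
    algorithm badly mis-scoring an image."""
    return statistics.median(per_algorithm_scores)

def tank_damage(image_scores):
    """Aggregate per-image fused scores into one tank-level value."""
    fused = [fuse_image_scores(scores) for scores in image_scores]
    return sum(fused) / len(fused)

# Four algorithms scoring three images (percent coatings damage).
images = [[2.0, 2.5, 2.2, 9.0],   # one outlier algorithm
          [4.0, 3.8, 4.1, 4.3],
          [1.0, 1.2, 0.9, 1.1]]
print(tank_damage(images))
```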
The process of implementing a damage detection strategy for engineering systems is often referred to as structural health monitoring (SHM). SHM is especially important for large structures such as suspension and cable-stayed bridges, towers, and offshore platforms. Several advanced technologies for infrastructure health monitoring have attracted attention, among which the wireless sensor network (WSN) has recently received special interest. A WSN has lower capital and installation costs and can make the communication of sensor measurements more reliable. However, for untethered nodes the finite energy budget is a primary design constraint. Therefore, as much data as possible should be processed inside the network to reduce the number of bits transmitted, particularly over longer distances.
In this paper, a WSN is proposed for health monitoring of an offshore platform, and a laboratory prototype was designed and developed to demonstrate the feasibility and validity of the proposed WSN. In the laboratory prototype, wireless sensor nodes were deployed on a model of an offshore platform, and a Wireless Sensor Local Area Network (WSLAN) transfers the simulated data between a personal computer and the microsensor nodes without cables. To minimize energy consumption, algorithms for fusing the acceleration, temperature, and magnetic sensors on a single node are being developed, and the current state of structural health is determined by fusing the data from the local nodes.
In our deployment, we use UC Berkeley motes as the wireless sensor nodes to examine many of the issues relating to their usage and to our information fusion algorithm.
The pilot on board a combat aircraft encounters, during any mission, a dynamically varying environment of diverse EO and RF threats. Different sensors are carried on board the aircraft to counter these threats. However, these sensors have their own limitations, and no single sensor is able to perform in all kinds of situations. In addition, technological advances in the threat scenario - in terms of higher speeds, smaller signatures and multimode guidance - and increasingly complex threats in the battlefield generate a large amount of data for the pilot, making his decision-making task very difficult due to increased workload. These challenges can be efficiently handled by deploying a system on board the aircraft with the comprehensive goal of autonomous target detection and tracking, situation and threat assessment, and decision making based on multi-sensor data fusion techniques. In this paper, major emerging EO and RF threats for a combat aircraft and some important EO and RF sensors on board the aircraft are discussed. A design approach for the development of a multi-sensor data fusion system for a combat aircraft, to provide better threat assessment than that provided by any single stand-alone sensor, is also presented.
In this paper we describe a simple physical test-bed developed to allow practical experimentation with the use of Decentralised Data Fusion (DDF) in sensor-to-shooter applications. Running DDF over an ad hoc network of distributed sensors produces target location information, which is used to guide a Leica laser-tracker system to designate currently tracked targets. We outline how the system is robust to network and/or node failure. Moreover, we discuss the system properties that make it completely “plug-and-play”: like the distributed sensor nodes, the “shooter” does not require knowledge of the overall network topology and can connect at any point.
Improved situational awareness is a primary goal for the Objective Force. Knowing where the enemy is and what threats his troops face provides the commander with the information he needs to plan his mission and give his forces maximum protection from the variety of threats present on the battlefield.
Sensors play an important role in providing critical information to enhance situational awareness. The sensors used on the battlefield include, among others, seismic and acoustic sensors and cameras covering different spectral ranges of the electro-magnetic spectrum. These sensors help track enemy movement and serve as part of an intrusion detection system. Characteristically, these sensors are relatively cheap and easy to deploy.
Chemical and biological agent detection is currently relegated to sensors that are specifically designed to detect these agents. Many of these sensors are collocated with the troops, so by the time the alarm is sounded the troops have already been exposed to the agent. In addition, battlefield contaminants frequently interfere with the performance of these sensors and cause false alarms. Since operating in a contaminated environment requires the troops to don protective garments that interfere with their performance, we need to reduce false alarms to an absolute minimum.
The Edgewood Chemical and Biological Center (ECBC) is currently conducting a study to examine the possibility of detecting chemical and biological weapons as soon as they are deployed. For that purpose we conducted a field test in which the acoustic, seismic, and electro-magnetic signatures of conventional and simulated chemical/biological 155 mm artillery shells were recorded by an array of corresponding sensors. Initial examination of the data shows distinct differences in the signatures of these weapons.
In this paper we provide a detailed description of the test procedures. We describe the various sensors used and the differences in the signatures generated by the conventional and the (simulated) chemical rounds. This paper will be followed by other papers that provide more detailed information gained by the various sensors and describe how fusing the data enhances the reliability of the CB detection process.
The threat of chemical and biological weapons is a serious problem and the ability to determine if an incoming artillery round contains high explosives or a chemical/biological agent is valuable information to anyone on the battlefield. Early detection of a chemical or biological agent provides the soldier with more time to respond to the threat. Information about the round type and location can be obtained from acoustic and seismic sensors and fused with information from radars, infrared and video cameras, and meteorological sensors to identify the round type quickly after detonation. This paper will describe the work with ground based acoustic and seismic sensors to discriminate between round types in a program sponsored by the Soldier Biological and Chemical Command.
In support of the Disparate Sensor Integration (DSI) Program, a number of imaging sensors were fielded to determine the feasibility of using information from these systems to discriminate between chemical simulant and high explosive munitions. The imaging systems recorded video from 160 training and 100 blind munitions detonation events. Two types of munitions were used: 155 mm high explosive rounds and 155 mm chemical simulant rounds. In addition, two different modes of detonation were used with these two classes of munitions: detonation on impact (point detonation) and detonation prior to impact (airblast). The imaging sensors fielded included two visible wavelength cameras, a near infrared camera, a mid wavelength infrared camera system, and a long wavelength infrared camera system.
Our work to date has concentrated on using the data from one of the visible wavelength camera systems and the long wavelength infrared camera system. The results provided in this paper clearly show the potential for discriminating between the two types of munitions and the two detonation modes using these camera data. It is expected that improved classification robustness will be achieved when the camera data described in this paper is combined with results and discriminating features generated from some of the other camera systems as well as the acoustic and seismic sensors also fielded in support of the DSI Program.
The paper provides a brief description of the camera systems and still imagery showing the four classes of explosive events at the same point in the munitions detonation sequence in both the visible and long wavelength infrared camera data. Next, the methods used to identify frames of interest from the overall video sequence are described in detail, followed by descriptions of the features extracted from the frames of interest. The system currently used for performing classification with the extracted features, and the results attained on the blind test data set, are then described, along with the work performed to date to fuse information from the visible and long wavelength infrared imaging sensors, including the benefits realized. The paper concludes with a description of our ongoing work to fuse imaging sensor data.
In this paper we provide a concurrency control and recovery (CCR) mechanism over cached LDAP objects. An LDAP server can be directly queried using system calls to retrieve data, but existing LDAP implementations do not provide CCR mechanisms; in such cases, it is up to the application to verify that accesses remain serialized. Our mechanism provides an independent layer over an existing LDAP server (Sun One Directory Server), which handles all user requests, serializes them based on two-phase locking and timestamp-ordering mechanisms, and provides XML-based logging for recovery management. Furthermore, while current LDAP servers only provide object-level locking, our scheme serializes transactions on individual attributes of LDAP objects (attribute-level locking). We have developed a Directory Enabled Network (DEN) simulator that operates on a subset of directory objects on an existing LDAP server to test the proposed mechanism. Our experiments show that the mechanism can gracefully address concurrency- and recovery-related issues over an LDAP server.
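Attribute-level locking can be sketched as a lock table keyed on (entry, attribute) pairs rather than on entries alone. The toy manager below uses hypothetical names and omits deadlock handling, timestamp ordering, and recovery; it only illustrates the granularity difference from object-level locking.

```python
import threading

class AttributeLockManager:
    """Attribute-level two-phase locking over LDAP-style entries.

    Locks are keyed on (entry_dn, attribute) rather than the whole
    entry, so transactions touching different attributes of the same
    object do not block each other. A minimal sketch only."""

    def __init__(self):
        self._guard = threading.Lock()
        self._locks = {}          # (dn, attr) -> holding transaction id

    def acquire(self, txn, dn, attr):
        with self._guard:
            key = (dn, attr)
            holder = self._locks.get(key)
            if holder is None or holder == txn:
                self._locks[key] = txn
                return True
            return False          # caller must wait or abort

    def release_all(self, txn):
        # 2PL shrinking phase: release everything at commit/abort.
        with self._guard:
            self._locks = {k: v for k, v in self._locks.items() if v != txn}

mgr = AttributeLockManager()
print(mgr.acquire("T1", "cn=router1,ou=net", "ipAddress"))  # True
print(mgr.acquire("T2", "cn=router1,ou=net", "ipAddress"))  # False: held by T1
print(mgr.acquire("T2", "cn=router1,ou=net", "owner"))      # True: other attribute
```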
The Sensor Management Team at DSO National Laboratories has developed a Decision Support System (DSS) to assist human operators in determining the most effective employment and/or deployment of a suite of sensors for a particular mission or operational scenario. The key issue addressed by the system is a resource allocation problem with two contradictory objectives: maximizing the combined coverage of the sensor suite and maximizing the survivability of the sensors within the suite. The system must also handle operational constraints on the usage of the sensors. In this paper, we describe how we handle the problem by formulating it as a Multiple Objective Optimization (MOO) problem. The system may be used as a pre-mission planning tool or as a real-time decision aid for the sensor suite commander. As the size of the sensor suite and the number of possible deployment sites increase, the feasibility space of employment/deployment configurations grows tremendously. To allow near-real-time decision support, the team has incorporated a genetic algorithm to solve the MOO problem.
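The abstract does not detail the genetic algorithm used. The sketch below shows one simple way such a deployment GA could be structured: candidates are bit-strings over deployment sites, the two objectives are scalarized into one fitness value, and the sensor-count constraint is enforced by penalty. The encoding, the scoring functions, and the equal-weight scalarization are illustrative assumptions, not DSO's actual formulation.

```python
import random

def evolve(num_sites, coverage, survivability, max_sensors,
           pop_size=40, generations=60, seed=0):
    """Toy GA for a two-objective sensor deployment problem.

    coverage[i] and survivability[i] are hypothetical per-site scores;
    a candidate is a bit-string choosing which sites receive a sensor,
    subject to a hard limit on the number of sensors available.
    """
    rng = random.Random(seed)

    def fitness(bits):
        if sum(bits) > max_sensors:          # operational constraint
            return -1.0
        cov = sum(c for b, c in zip(bits, coverage) if b)
        sur = min((s for b, s in zip(bits, survivability) if b),
                  default=0.0)               # weakest sensor's survivability
        return 0.5 * cov + 0.5 * sur         # scalarized two objectives

    pop = [[rng.randint(0, 1) for _ in range(num_sites)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]         # elitist selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, num_sites)        # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                   # bit-flip mutation
                i = rng.randrange(num_sites)
                child[i] ^= 1
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

A Pareto-ranking scheme such as NSGA-II would preserve the full coverage/survivability trade-off surface instead of collapsing it to one weighted score; the scalarized version is shown only because it keeps the sketch short.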
The talk describes the development of basic technologies for intelligent systems that fuse data from multiple domains, leading to automated computational techniques for understanding data content. Understanding involves inferring appropriate decisions and recommending proper actions, which in turn requires fusion of data and knowledge about objects, situations, and actions. Data might include sensory data, verbal reports, intelligence intercepts, or public records, whereas knowledge ought to encompass the whole range of objects, situations, people and their behavior, and knowledge of languages. In the past, a fundamental difficulty in combining knowledge with data was the combinatorial complexity of the computations: too many combinations of data and knowledge pieces had to be evaluated. Recent progress in the understanding of natural intelligent systems, including the human mind, has led to the development of neurophysiologically motivated architectures for solving these challenging problems, in particular through the role of emotional neural signals in overcoming the combinatorial complexity of old logic-based approaches.
Whereas past approaches based on logic tended to identify logic with language and thinking, recent studies in cognitive linguistics have led to an appreciation of the more complicated nature of linguistic models. Little is known about the details of the brain mechanisms integrating language and thinking. Understanding and fusion of linguistic information with sensory data represent a novel, challenging aspect of the development of integrated fusion systems. The presentation will describe a non-combinatorial approach to this problem and outline techniques that can be used for fusing diverse and uncertain knowledge with sensory and linguistic data.