A proposed framework using spectral and spatial information is introduced for neural net multisensor data fusion. It consists of a set of independent sensor neural nets, one for each sensor (type of data), coupled to a fusion net. The neural net of each sensor is trained from a representative data set of the particular sensor to map to a hypothesis space output. The decision outputs from the sensor nets are used to train the fusion net to an overall decision. During the initial processing, three-dimensional (3-D) point cloud data (PCD) are segmented by a multidimensional mean-shift algorithm into clustered objects. Concurrently, multiband spectral imagery data (multispectral or hyperspectral) are spectrally segmented by the stochastic expectation-maximization (SEM) algorithm into a cluster map containing (spectral-based) pixel classes. In the proposed sensor fusion, spatial detections and spectral detections complement each other. They are fused into final detections by a cascaded neural network, which consists of two levels of neural nets. The success of the approach in utilizing sensor synergism for an enhanced classification is demonstrated for the specific case of classifying hyperspectral imagery and PCD extracted from LIDAR, obtained from an airborne data collection over the campus of the University of Southern Mississippi in Gulfport, Mississippi.
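As a concrete illustration of the cascaded two-level fusion described above, the following Python sketch wires a spatial sensor net and a spectral sensor net into a fusion net that maps their decision outputs to an overall class decision. It uses PyTorch, and the feature dimensions, class count, and layer sizes are illustrative assumptions rather than the configuration reported in the paper.

```python
# Minimal sketch (assumed sizes) of the two-level cascaded fusion net:
# a spatial sensor net for LIDAR-derived features, a spectral sensor net
# for HSI-derived features, and a fusion net over their decision outputs.
import torch
import torch.nn as nn

class SensorNet(nn.Module):
    """Single-sensor net: feature vector -> class-hypothesis scores."""
    def __init__(self, n_features, n_classes, n_hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_classes))

    def forward(self, x):
        return torch.softmax(self.net(x), dim=-1)

class FusionNet(nn.Module):
    """Second level: concatenated sensor decisions -> overall decision."""
    def __init__(self, n_classes, n_sensors=2, n_hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_sensors * n_classes, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_classes))

    def forward(self, sensor_decisions):
        return self.net(torch.cat(sensor_decisions, dim=-1))

# Illustrative dimensions only (not from the paper).
spatial_net = SensorNet(n_features=12, n_classes=4)   # LIDAR cluster features
spectral_net = SensorNet(n_features=60, n_classes=4)  # HSI pixel spectra
fusion_net = FusionNet(n_classes=4)

x_spatial = torch.randn(8, 12)
x_spectral = torch.randn(8, 60)
overall = fusion_net([spatial_net(x_spatial), spectral_net(x_spectral)])
print(overall.shape)  # torch.Size([8, 4]) class scores
```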
Target recognition and classification in a 3D point cloud is a non-trivial process because of the nature of the data collected by a sensor system. The signal can be corrupted by noise from the environment, the electronic system, the A/D converter, and so on. An adaptive system with a specified tolerance is therefore required to perform classification and recognition reliably. The feature-based pattern recognition architecture described below is devised to solve single-sensor classification non-parametrically: a feature set is extracted from an input point cloud, normalized, and classified by a neural network classifier. The appropriate feature set depends on the scene; for instance, automatic target recognition in an urban area would require different features from recognition in a dense foliage area.
The figure above (see manuscript) illustrates the architecture of the feature-based adaptive signature extraction for 3D point clouds, including LIDAR, RADAR, and electro-optical data. The network takes a 3D cluster and classifies it into a specific class. The algorithm is a supervised, adaptive classifier with two modes: a training mode and a performing mode. For the training mode, a number of novel patterns are selected from actual or artificial data. A particular 3D cluster is input to the network, as shown above, to produce the decision-class output. The network consists of three sequential functional modules. The first module performs feature extraction, reducing the input cluster to a set of singular-value features (a feature vector). The feature vector is then passed to the feature normalization module, which normalizes and balances it before it is fed to the neural net classifier. The neural net can be trained on actual or artificial novel data until each trained output reaches the declared output within the defined tolerance. If new novel data are added after the neural net has been trained, training resumes until the net has incrementally learned the new data; the associative memory capability of the neural net enables this incremental learning. A backpropagation-trained network or a support vector machine can be used for the classification and recognition.
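A minimal Python sketch of the three-module pipeline (singular-value feature extraction, feature normalization, neural net classification) is given below. The feature definition, synthetic clusters, and classifier settings are assumptions for illustration, not the authors' exact design.

```python
# Hedged sketch of the three-module pipeline: singular-value features from a
# 3D point cluster, feature normalization, and a neural net classifier.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

def singular_value_features(cluster_xyz):
    """cluster_xyz: (N, 3) points of one segmented object.
    Returns singular values of the mean-centered cluster as shape features."""
    centered = cluster_xyz - cluster_xyz.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)       # 3 singular values
    return np.concatenate([s, s / (s.sum() + 1e-12)])   # raw + relative spread

# Illustrative training data: a list of labeled synthetic clusters.
rng = np.random.default_rng(0)
clusters = [rng.normal(size=(100, 3)) * rng.uniform(0.2, 2.0, 3) for _ in range(40)]
labels = rng.integers(0, 3, size=40)                    # hypothetical class ids

X = np.vstack([singular_value_features(c) for c in clusters])
X = StandardScaler().fit_transform(X)                   # feature normalization module
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, labels)
print(clf.predict(X[:5]))
```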
An architecture for neural net multi-sensor data fusion is introduced and analyzed. This architecture consists of a set of independent sensor neural nets, one for each sensor, coupled to a fusion net. The neural net of each sensor is trained (from a representative data set of the particular sensor) to map to a hypothesis space output. The decision outputs from the sensor nets are used to train the fusion net to an overall decision. To begin the processing, the 3D point cloud LIDAR data is segmented and classified into clustered objects using a multi-dimensional mean-shift algorithm. Similarly, the multi-band HSI data is spectrally classified by the Stochastic Expectation-Maximization (SEM) algorithm into a classification map containing pixel classes. For sensor fusion, spatial detections and spectral detections complement each other. They are fused into final detections by a cascaded neural network, which consists of two levels of neural nets. The first level is the sensor level, consisting of two neural nets: a spatial neural net and a spectral neural net. The second level consists of a single neural net, the fusion neural net. The success of the system in utilizing sensor synergism for an enhanced classification is clearly demonstrated by applying this architecture to the November 2010 airborne data collection of LIDAR and HSI over the Gulfport, MS, area.
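The mean-shift step mentioned above can be illustrated with the short sketch below, which segments a synthetic point cloud into clustered objects using scikit-learn. The bandwidth choice and data are assumptions for illustration; the multi-dimensional mean-shift in the paper may operate on additional attributes beyond x, y, z.

```python
# Minimal sketch of mean-shift segmentation of a 3D point cloud into
# clustered objects (synthetic data, assumed bandwidth).
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

rng = np.random.default_rng(1)
# Two synthetic "objects" plus scattered near-ground points.
points = np.vstack([
    rng.normal([0, 0, 2], 0.3, size=(200, 3)),
    rng.normal([8, 5, 3], 0.4, size=(200, 3)),
    rng.uniform([-5, -5, 0], [15, 10, 0.2], size=(300, 3)),
])

bandwidth = estimate_bandwidth(points, quantile=0.1)
labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(points)
print("clustered objects found:", len(np.unique(labels)))
```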
Performance measures are derived for data-adaptive hypothesis testing by systems trained on stochastic data. The measures consist of the averaged performance of the system over the ensemble of training sets. The training-set-based measures are contrasted with maximum a posteriori probability (MAP) test measures. It is shown that the training-set-based and MAP test probabilities are equal if the training set is proportioned according to the prior probabilities of the hypotheses. Applications of training-set-based measures are suggested for neural net and training set design.
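The stated equality can be illustrated numerically. The sketch below (a Monte Carlo demonstration, not the paper's derivation) draws a training set proportioned according to assumed priors for two Gaussian hypotheses and shows that the training-set-based probability of a correct decision matches the MAP test probability.

```python
# Numerical sketch: when the training set is proportioned according to the
# priors, the correct-decision probability measured on the training set
# agrees with the MAP test probability (two Gaussian hypotheses, assumed priors).
import numpy as np

rng = np.random.default_rng(0)
priors = np.array([0.7, 0.3])            # assumed prior probabilities
means, sigma = np.array([0.0, 2.0]), 1.0
n = 200_000

# Draw a large "training set" proportioned according to the priors.
labels = (rng.random(n) < priors[1]).astype(int)
x = rng.normal(means[labels], sigma)

def map_decision(x):
    # MAP rule: choose the hypothesis with the larger prior-weighted likelihood.
    like = np.stack([np.exp(-0.5 * ((x - m) / sigma) ** 2) for m in means])
    return np.argmax(priors[:, None] * like, axis=0)

# Training-set-based probability of a correct decision ...
p_train = np.mean(map_decision(x) == labels)
# ... agrees with the MAP test probability averaged over the priors.
p_map = sum(priors[h] * np.mean(map_decision(rng.normal(means[h], sigma, n)) == h)
            for h in range(2))
print(round(p_train, 3), round(p_map, 3))  # nearly equal
```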
An intuitive architecture for neural net multisensor data fusion consists of a set of independent sensor neural nets, one for each sensor, coupled to a fusion net. The neural net of each sensor is trained from a representative data set of the particular sensor to map to a hypothesis space output. The decision outputs from the sensor nets are used to train the fusion net to an overall decision. In this paper the sensor fusion architecture is applied to an experiment involving the multisensor observation of object deployments during the recent Firefly launches. The deployments were measured simultaneously by X-band, CO2 laser, and L-band radars. The range-Doppler images from the X-band and CO2 laser radars were combined with a passive IR spectral simulation of the deployment to form the data inputs to the neural sensor fusion system. The network was trained to distinguish the predeployment, deployment, and postdeployment phases of the launch based on the fusion of these sensors. The success of the system in utilizing sensor synergism for enhanced deployment detection is clearly demonstrated.
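A hedged sketch of the two-stage training procedure implied above is shown below: each sensor net is trained on its own representative data, and the fusion net is then trained on the concatenated decision outputs of the sensor nets. The sensor names, feature dimensions, and synthetic data are placeholders, not the Firefly measurements.

```python
# Two-stage training sketch: per-sensor nets first, then a fusion net trained
# on their decision outputs (synthetic placeholder data, assumed dimensions).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
n, phases = 300, 3                        # predeployment / deployment / postdeployment
y = rng.integers(0, phases, size=n)

# Synthetic per-sensor feature vectors (assumed dimensions).
sensors = {
    "xband_rd": rng.normal(y[:, None], 1.0, size=(n, 20)),   # X-band range-Doppler
    "co2_rd":   rng.normal(y[:, None], 1.0, size=(n, 20)),   # CO2 laser range-Doppler
    "ir_spec":  rng.normal(y[:, None], 1.0, size=(n, 10)),   # passive IR spectral sim
}

# Stage 1: train one neural net per sensor.
sensor_nets = {name: MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
               for name, X in sensors.items()}

# Stage 2: train the fusion net on the concatenated decision outputs.
decisions = np.hstack([net.predict_proba(sensors[name])
                       for name, net in sensor_nets.items()])
fusion_net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000).fit(decisions, y)
print(fusion_net.score(decisions, y))
```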