Paper
Pixel-Level Sensor Fusion For Improved Object Recognition
9 August 1988
Greg Duane
Abstract
A method is proposed to exploit simultaneous, co-registered FLIR and TV images of isolated objects against relatively bland backgrounds to improve recognition of those objects. The method uses edges extracted from the TV imagery to segment objects in the FLIR imagery. A binary tree classifier is shown to perform significantly better with objects defined in this manner than with objects extracted separately from the FLIR or TV images, or with a feature-level fusion scheme which combines features of separately extracted objects. The structure of the tree indicates that the cross-segmented objects are simply ordered in feature space. An argument is presented that this sensor fusion scheme is natural in terms of the known organization of neural vision systems. Generalizations to other sensor types and fusion schemes should be considered, since it has been shown that co-registered imagery can be exploited to improve recognition at no additional computational cost.
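The abstract describes the cross-segmentation step only at a high level. The sketch below is an illustrative reconstruction, not the paper's implementation: it uses Canny edges from the TV frame to define an object mask that is then applied to the co-registered FLIR frame, assuming an isolated object on a bland background. It assumes OpenCV 4.x and NumPy; the function name cross_segment and the threshold values are hypothetical.

```python
# Illustrative sketch of pixel-level cross-segmentation (assumptions noted
# above): TV edges define the object boundary, the filled boundary masks the
# co-registered FLIR frame.
import cv2
import numpy as np

def cross_segment(tv_gray: np.ndarray, flir: np.ndarray,
                  canny_lo: int = 50, canny_hi: int = 150) -> np.ndarray:
    """Return FLIR pixels inside the object boundary found in the TV image.

    tv_gray: 8-bit grayscale TV frame; flir: co-registered FLIR frame.
    """
    assert tv_gray.shape == flir.shape, "images must be co-registered"

    # 1. Edge extraction from the TV image.
    edges = cv2.Canny(tv_gray, canny_lo, canny_hi)

    # 2. Close small gaps, then take the largest external contour as the
    #    object (reasonable for one isolated object on a bland background).
    edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE,
                             np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return np.zeros_like(flir)
    obj = max(contours, key=cv2.contourArea)

    # 3. Fill the contour to get a binary mask and apply it to the FLIR image.
    mask = np.zeros_like(tv_gray)
    cv2.drawContours(mask, [obj], -1, 255, thickness=cv2.FILLED)
    return np.where(mask > 0, flir, 0)

# Downstream, scalar features of the masked FLIR object (area, mean intensity,
# moments, and the like) would feed a binary tree classifier.
```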
© (1988) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Greg Duane "Pixel-Level Sensor Fusion For Improved Object Recognition", Proc. SPIE 0931, Sensor Fusion, (9 August 1988); https://doi.org/10.1117/12.946666
CITATIONS
Cited by 19 scholarly publications and 1 patent.
KEYWORDS
Image segmentation
Image fusion
Forward looking infrared
Feature extraction
Sensor fusion
Sensors
Object recognition