Multispectral imaging is an attractive sensing modality for small unmanned aerial vehicles (UAVs) in numerous military and civilian applications such as reconnaissance, target detection, and precision agriculture. Cameras based on patterned filters in the focal plane, such as conventional colour cameras, represent the most compact architecture for spectral imaging, but image reconstruction becomes challenging at higher band counts. We consider a camera configuration where six bandpass filters are arranged in a periodically repeating pattern in the focal plane. In addition, a large unfiltered region permits conventional monochromatic video imaging that can be used for situational awareness (SA), including estimating the camera motion and the 3D structure of the ground surface. As the platform moves, the filters are scanned over the scene, capturing an irregular pattern of spectral samples of the ground surface. Through estimation of the camera trajectory and 3D scene structure, it is still possible to assemble a spectral image by fusing all measurements in software. The repeated sampling of bands enables spectral consistency testing, which can improve spectral integrity significantly. The result is a truly multimodal camera sensor system able to produce a range of image products. Here, we investigate its application in tactical reconnaissance by pushing towards on-board real-time spectral reconstruction based on visual odometry (VO) and full 3D reconstruction of the scene. The results are compared with offline processing based on estimates from visual simultaneous localisation and mapping (VSLAM) and indicate that the multimodal sensing concept has a clear potential for use in tactical reconnaissance scenarios.
We propose a method for jointly estimating intrinsic calibration and internal clock synchronisation for a pan-tilt-zoom (PTZ) camera using only data that can be acquired in the field during normal operation. Results show that this method is a promising starting point towards using software to replace costly timing hardware in such cameras. Through experiments we provide calibration and clock synchronisation for an off-the-shelf low-cost PTZ camera, and observe a greatly improved directional accuracy, even during mild manoeuvres.
Cameras with filters in the focal plane provide the most compact solution for multispectral imaging. A small UAV can carry multiple such cameras, providing large area coverage rate at high spatial resolution. We investigate a camera concept where a patterned bandpass filter with six bands provides multiple interspersed recordings of all bands, enabling consistency checks for improved spectral integrity. A compact sensor payload has been built with multiple cameras and a data acquisition computer. Recorded imagery demonstrates the potential for large area coverage with good spectral integrity.
In some applications of multi- or hyperspectral imaging, it is important to have a compact sensor. The most compact
spectral imaging sensors are based on spectral filtering in the focal plane. For hyperspectral imaging, it has been
proposed to use a "linearly variable" bandpass filter in the focal plane, combined with scanning of the field of view. As
the image of a given object in the scene moves across the field of view, it is observed through parts of the filter with
varying center wavelength, and a complete spectrum can be assembled. However, if the radiance received from the object
varies with viewing angle, or with time, then the reconstructed spectrum will be distorted. We describe a camera design
where this hyperspectral functionality is traded for multispectral imaging with better spectral integrity. Spectral
distortion is minimized by using a patterned filter with 6 bands arranged close together, so that a scene object is seen by
each spectral band in rapid succession and with minimal change in viewing angle. The set of 6 bands is repeated 4 times
so that the spectral data can be checked for internal consistency. Still the total extent of the filter in the scan direction is
small. Therefore the remainder of the image sensor can be used for conventional imaging with potential for using motion
tracking and 3D reconstruction to support the spectral imaging function. We show detailed characterization of the point
spread function of the camera, demonstrating the importance of such characterization as a basis for image reconstruction.
A simplified image reconstruction based on feature-based image coregistration is shown to yield reasonable results.
Elimination of spectral artifacts due to scene motion is demonstrated.
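The consistency check enabled by the repeated 6-band filter sets can be sketched as follows. This is a minimal illustration, not the paper's actual reconstruction: the function name, the median-based test, and the tolerance are all assumptions.

```python
import numpy as np

def consistency_filter(samples, rel_tol=0.15):
    """Combine repeated spectral samples of one ground point.

    samples: array of shape (n_repeats, n_bands) -- up to 4 repeats
             of the 6-band set; NaN marks missing samples.
    Returns the per-band mean of samples that agree with the per-band
    median within rel_tol, plus a validity mask. Samples distorted by
    scene motion or changing viewing angle fail the test and are dropped.
    """
    samples = np.asarray(samples, dtype=float)
    med = np.nanmedian(samples, axis=0)              # robust reference spectrum
    ok = np.abs(samples - med) <= rel_tol * np.abs(med)
    ok &= ~np.isnan(samples)
    valid = ok.sum(axis=0) >= 2                      # require >= 2 agreeing repeats
    fused = np.where(valid,
                     np.nansum(np.where(ok, samples, 0.0), axis=0)
                     / np.maximum(ok.sum(axis=0), 1),
                     np.nan)
    return fused, valid
```

With four repeats of a 6-band spectrum and one corrupted sample, the corrupted value is rejected and the remaining repeats are averaged.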
Seven countries within the European Defence Agency (EDA) framework are joining effort in a four year project (2009-2013) on Detection in Urban scenario using Combined Airborne imaging Sensors (DUCAS). Data has been collected in a joint field trial including instrumentation for 3D mapping, hyperspectral and high resolution imagery together with in situ instrumentation for target, background and atmospheric characterization. Extensive analysis with respect to detection and classification has been performed. Progress in performance has been shown using combinations of hyperspectral and high spatial resolution sensors.
A research platform with four cameras in the infrared and visible spectral domains is under development at the Norwegian Defence Research Establishment (FFI). The platform will be mounted on a high-speed jet aircraft and will primarily be used for image acquisition and for development and test of automatic target recognition (ATR) algorithms. The sensors on board produce large amounts of data, the algorithms can be computationally intensive and the data processing is complex. This puts great demands on the system architecture; it has to run in real-time and at the same time be suitable for algorithm development. In this paper we present an architecture for ATR systems that is designed to be flexible, generic and efficient.
The architecture is module based so that certain parts, e.g. specific ATR algorithms, can be exchanged without affecting the rest of the system. The modules are generic and can be used in various ATR system configurations. A software framework in C++ that handles large data flows in non-linear pipelines is used for implementation. The framework exploits several levels of parallelism and lets the hardware processing capacity be fully utilised. The ATR system is under development and has reached a first level that can be used for segmentation algorithm development and testing. The implemented system consists of several modules, and although their content is still limited, the segmentation module includes two different segmentation algorithms that can be easily exchanged. We demonstrate the system by applying the two segmentation algorithms to infrared images from sea trial recordings.
KEYWORDS: Cameras, Monte Carlo methods, Point spread functions, Image fusion, Ray tracing, Sensors, Image processing, Imaging systems, Data fusion, Image resolution
Images from airborne cameras can be a valuable resource for data fusion, but this typically requires them to be georeferenced. This usually implies that the information of every pixel should be accompanied by a single geographical position describing where the center of the pixel is located in the scene. This geospatial information is well suited for tasks like target positioning and orthorectification. But when it comes to fusion, a detailed description of the area on the ground contributing to the pixel signal would be preferable over a single position. In this paper we present a method for estimating these regions. Simple Monte Carlo simulations are used to combine the influences of the main geometrical aspects of the imaging process, such as the point spread function, the camera’s motion and the topography in the scene. Since estimates of the camera motion are uncertain to some degree, this is incorporated in the simulations as well. For every simulation, a pixel’s sampling point in the scene is estimated by intersecting a randomly sampled line of sight with a 3D-model of the scene. Based on the results of numerous simulations, the pixel’s sampling region can be represented by a suitable probability distribution. This will be referred to as the pixel’s footprint distribution (PFD). We present results for high resolution hyperspectral pushbroom images of an urban scene.
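The Monte Carlo procedure can be sketched as below. This is a simplified stand-in for the paper's method: the isotropic angular jitter, the flat-ground proxy for the 3D scene model, and all parameter values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def pixel_footprint(cam_pos, los, n_sim=2000,
                    psf_sigma=1e-4, pose_sigma=1e-4, ground_z=0.0):
    """Monte Carlo estimate of a pixel's footprint distribution (PFD).

    Each draw perturbs the nominal line of sight `los` by the PSF spread
    and by camera-pose uncertainty, then intersects the resulting ray
    with a flat-ground stand-in for the 3D scene model (z = ground_z).
    Returns the sampled ground points, shape (n_sim, 2); their mean and
    covariance summarize the PFD.
    """
    los = np.asarray(los, float) / np.linalg.norm(los)
    # Angular perturbation: PSF spread combined with attitude uncertainty
    jitter = rng.normal(0.0, np.hypot(psf_sigma, pose_sigma), (n_sim, 3))
    rays = los + jitter
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)
    # Ray/plane intersection: p = cam_pos + t * ray, solved for p_z = ground_z
    t = (ground_z - cam_pos[2]) / rays[:, 2]
    pts = cam_pos + t[:, None] * rays
    return pts[:, :2]

pts = pixel_footprint(np.array([0.0, 0.0, 1000.0]), [0.0, 0.0, -1.0])
mu, cov = pts.mean(axis=0), np.cov(pts.T)   # Gaussian summary of the PFD
```

In the paper the flat plane is replaced by a full 3D scene model, so the sampled points follow the local topography rather than a plane.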
The work presented in this paper is based on a dataset recorded with an airborne sensor. It comprises targets like M-60,
M-47, M-113, bridge layers, tank retrievers, and trucks in various types of scenes.
The background-object segmentation consists of first estimating the ground level everywhere in the scene, and then, for
each sample, subtracting the ground level height from the measured height. No assumptions concerning flat terrain etc.
are made.
Samples with height above ground level higher than a certain threshold are clustered by utilizing a straightforward
agglomerative clustering algorithm. Around each cluster the bounding box with minimum volume is determined. Based
on these bounding boxes, too small as well as too large clusters can easily be removed.
However, vehicle-sized clutter will not be removed. Clutter detection is based on estimating the normal vector for a
plane approximation around each sample. This approach is based on the fact that the surface normals of a vehicle are more
“modulo 90°” distributed than those of clutter.
The aim of the classification has been to classify main battle tanks (MBTs). Two types of algorithms have been studied,
one based on Dempster-Shafer fusion theory, and one model based.
Our dataset comprises clusters of 269 vehicles (among them 131 MBTs), and 253 clutter objects (i.e. in practice vehicle-sized
bushes). The experiments we have carried out show that the segmentation extracts all vehicles, the clutter detection
removes 90% of the clutter, and the classification finds more than 95% of the MBTs as well as removes half of the
remaining clutter.
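The thresholding and clustering steps can be sketched roughly as below. This is a simplified illustration: the single-linkage flood-fill, the link distance, and the size thresholds are assumptions, since the paper's actual parameters are not given here.

```python
import numpy as np

def segment_vehicles(points, ground_z, h_min=0.5, link_dist=1.0,
                     min_pts=10, max_pts=5000):
    """Height-based segmentation sketch (assumed parameters).

    points:   (N, 3) airborne samples
    ground_z: (N,) estimated ground level under each sample
    Keeps samples more than h_min above ground, groups them by
    single-linkage clustering with link distance link_dist, and drops
    clusters that are too small or too large (too-small/too-large
    bounding-box removal is approximated here by point count).
    """
    above = points[points[:, 2] - ground_z > h_min]
    n = len(above)
    labels = np.full(n, -1)
    cur = 0
    for i in range(n):                       # flood-fill connected groups
        if labels[i] != -1:
            continue
        stack = [i]
        labels[i] = cur
        while stack:
            j = stack.pop()
            d = np.linalg.norm(above - above[j], axis=1)
            for k in np.nonzero((d < link_dist) & (labels == -1))[0]:
                labels[k] = cur
                stack.append(k)
        cur += 1
    clusters = [above[labels == c] for c in range(cur)]
    return [c for c in clusters if min_pts <= len(c) <= max_pts]
```

The normal-vector clutter test would then be applied per cluster, comparing the angular distribution of local plane normals against a "modulo 90°" pattern.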
The EDA project "Detection in Urban scenario using Combined Airborne imaging Sensors" (DUCAS) is in progress.
The aim of the project is to investigate the potential benefit of combined high spatial and spectral resolution airborne
imagery for several defense applications in the urban area. The project is taking advantage of the combined resources
from 7 contributing nations within the EDA framework. An extensive field trial has been carried out in the city of
Zeebrugge at the Belgian coast in June 2011. The Belgian armed forces contributed with platforms, weapons, personnel
(soldiers) and logistics for the trial. Ground truth measurements with respect to geometrical characteristics, optical
material properties and weather conditions were obtained in addition to hyperspectral, multispectral and high resolution
spatial imagery.
High spectral/spatial resolution sensor data are used for detection, classification, identification and tracking.
The paper describes the georeferencing part of an airborne hyperspectral imaging system based on pushbroom scanning.
Using ray-tracing methods from computer graphics and a highly efficient representation of the digital elevation model
(DEM), georeferencing of high resolution pushbroom images runs in real time by a large margin. By adapting the
georeferencing to match the DEM resolution, the camera field of view and the flight altitude, the method has the potential
to provide real-time georeferencing, even for HD video on a high-resolution DEM when a graphics processing unit (GPU)
is used for processing.
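The core ray/DEM intersection can be illustrated with a naive ray marcher. This is a stand-in for the paper's optimized traversal and DEM representation; the fixed step size and nearest-neighbour height lookup are assumptions.

```python
import numpy as np

def georeference(cam_pos, ray, dem, cell=1.0, step=0.5, t_max=5000.0):
    """Intersect one pixel's line of sight with a DEM (simple marcher).

    dem:  2D height grid, dem[i, j] holds terrain height at
          world (x, y) = (j * cell, i * cell)
    Marches along the ray until it falls below the terrain, then
    refines the crossing point by bisection. Returns the ground
    point, or None if the ray never hits within t_max.
    """
    ray = np.asarray(ray, float) / np.linalg.norm(ray)

    def height_at(p):                     # nearest-neighbour DEM lookup
        i = min(max(int(round(p[1] / cell)), 0), dem.shape[0] - 1)
        j = min(max(int(round(p[0] / cell)), 0), dem.shape[1] - 1)
        return dem[i, j]

    t_lo, t = 0.0, step
    while t < t_max:
        p = cam_pos + t * ray
        if p[2] <= height_at(p):          # ray has crossed below terrain
            for _ in range(30):           # bisection refinement
                t_mid = 0.5 * (t_lo + t)
                if (cam_pos + t_mid * ray)[2] <= height_at(cam_pos + t_mid * ray):
                    t = t_mid
                else:
                    t_lo = t_mid
            return cam_pos + t * ray
        t_lo, t = t, t + step
    return None
```

A production version would traverse DEM cells exactly (as in grid-based ray tracing) instead of fixed stepping, which is what makes the real-time margin reported in the paper achievable.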
An airborne system for hyperspectral target detection is described. The main sensor is a HySpex pushbroom
hyperspectral imager for the visible and near-infrared spectral range with 1600 pixels across track, supplemented by a
panchromatic line imager. An optional third sensor can be added, either a SWIR hyperspectral camera or a thermal
camera. In real time, the system performs radiometric calibration and georeferencing of the images, followed by image
processing for target detection and visualization. The current version of the system implements only spectral anomaly
detection, based on normal mixture models. Image processing runs on a PC with a multicore Intel processor and an
Nvidia graphics processing unit (GPU). The processing runs in a software framework optimized for large sustained data
rates. The platform is a Cessna 172 aircraft based close to FFI, modified with a camera port in the floor.
We have performed a field trial to evaluate technologies for stand-off detection of biological aerosols, both in daytime
and at night. Several lidar (light detection and ranging) systems were tested in parallel. We present the results from three
different lidar systems; one system for detection and localization of aerosol clouds using elastic backscattering at
1.57 μm, and two systems for detection and classification of aerosol using spectral detection of ultraviolet laser-induced
fluorescence (UV LIF) excited at 355 nm. The UV lidar systems used different technologies for the spectral
detection: a photomultiplier tube (PMT) array and an intensified charge-coupled device (ICCD), respectively. During the
first week of the field trial, the lidar systems measured towards a semi-closed chamber at a distance of 230 m. The
chamber was built from two docked standard 20-foot containers with air curtains at the short ends to contain the aerosol
inside the chamber. Aerosol was generated inside the semi-closed chamber and monitored by reference equipment, e.g.
a slit sampler and particle counters. Signatures from several biological warfare agent simulants and interferents were
measured at different aerosol concentrations. During the second week the aerosol was released in the open air and the
reference equipment was located in the centre of the test site. The lidar systems measured towards the test site
centre at distances of either 230 m or approximately 1 km. In this paper we present results and some preliminary
signal processing for discrimination between different types of simulants and interference aerosols.
The paper outlines a new method for band selection derived from a multivariate normal mixture anomaly detection
method. The method consists in evaluating detection performance in terms of false alarm rates for all band
configurations obtainable from an input image by selecting and combining bands according to selection criteria
reflecting sensor physics. We apply the method to a set of hyperspectral images in the visible and near-infrared spectral
domain spanning a range of targets, backgrounds and measurement conditions. We find optimum bands, and investigate
the feasibility of defining a common band set for a range of scenarios. The results suggest that near optimal performance
can be obtained using general configurations with less than 10 bands. This may have implications for the choice of
sensor technology in target detection applications. The study is based on images with high spectral and spatial resolution
from the HySpex hyperspectral sensor.
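The exhaustive evaluation over band configurations can be sketched as follows, with a simple RX-style Mahalanobis detector standing in for the paper's multivariate normal mixture model; the function name, the scoring rule, and the thresholding convention are all illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def band_selection(background, targets, n_bands):
    """Exhaustive band-subset search scored by false alarm rate.

    background: (N, B) background spectra; targets: (M, B) target spectra.
    For each n_bands-subset, fit a Gaussian background model, set the
    detection threshold at the weakest target (so all targets are
    detected), and score the subset by the resulting false alarm rate.
    Returns (best false alarm rate, best band tuple).
    """
    best = (1.1, None)
    for bands in combinations(range(background.shape[1]), n_bands):
        bg = background[:, bands]
        tg = targets[:, bands]
        mu = bg.mean(axis=0)
        cov = np.cov(bg.T) + 1e-9 * np.eye(n_bands)   # regularized covariance
        icov = np.linalg.inv(cov)
        rx = lambda x: np.einsum('ij,jk,ik->i', x - mu, icov, x - mu)
        thr = rx(tg).min()                  # detect every target ...
        far = np.mean(rx(bg) >= thr)        # ... at this false alarm rate
        if far < best[0]:
            best = (far, bands)
    return best
```

Physical selection criteria (e.g. excluding bands a given sensor technology cannot realize) would simply restrict the set of candidate combinations before scoring.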
KEYWORDS: Target detection, Atmospheric modeling, Performance modeling, Detection and tracking algorithms, RGB color model, Sensors, Signal to noise ratio, Hyperspectral imaging, Reflectivity, Data modeling
We study material identification in a forest scene under strongly varying illumination conditions, ranging from open sunlit conditions to shaded conditions between dense tree-lines. The algorithm used is a physical subspace model, where the pixel spectrum is modelled by a subspace of physically predicted radiance spectra. We show that a pure sunlight and skylight model is not sufficient to detect shaded targets. However, by expanding the model to also represent reflected light from the surrounding vegetation, the performance of the algorithm is improved significantly. We also show that a model based on a standardized set of simulated conditions gives results equivalent to those obtained from a model based on measured ground truth spectra. Detection performance is characterized as a function of subspace dimensionality, and we find an optimum at around four dimensions. This result is consistent with what is expected from the signal-to-noise ratio in the data set. The imagery used was recorded using a new hyperspectral sensor, the Airborne Spectral Imager (ASI). The present data were obtained using the visible and near-infrared module of ASI, covering the 0.4-1.0 μm region with 160 bands. The spatial resolution is about 0.2 mrad so that the studied targets are resolved into pure pixels.
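The subspace test at the heart of the algorithm can be illustrated as follows. This is a hedged sketch: the energy-fraction score and the function name are assumptions, not the paper's exact formulation, but the structure (a subspace spanned by physically predicted radiance spectra for sun, sky, and vegetation-reflected illumination) follows the description above.

```python
import numpy as np

def subspace_score(pixels, predicted):
    """Matched-subspace score for material identification (sketch).

    pixels:    (N, B) measured radiance spectra
    predicted: (K, B) physically predicted target radiance spectra under
               different illumination conditions (sun, sky, and light
               reflected from surrounding vegetation)
    Returns the fraction of each pixel's energy lying inside the span of
    the predicted spectra (1 = perfectly explained by the model).
    """
    # Orthonormal basis of the predicted-radiance subspace via SVD
    _, s, vt = np.linalg.svd(predicted, full_matrices=False)
    basis = vt[s > 1e-10 * s[0]]          # rows are orthonormal
    proj = pixels @ basis.T @ basis       # projection onto the subspace
    num = np.sum(proj * proj, axis=1)
    den = np.sum(pixels * pixels, axis=1)
    return num / np.maximum(den, 1e-30)
```

Adding the vegetation-reflected component corresponds to appending extra rows to `predicted`, which enlarges the subspace; the paper's observed optimum near four dimensions reflects the trade-off between model expressiveness and noise.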