The majority of unattended ground sensors (UGS) use both acoustic and seismic sensors to detect various targets, namely, people, vehicles, etc. Once deployed, a UGS should operate for several months before its batteries are changed, which implies that using fewer sensors may result in longer UGS life. Towards this goal, we are exploring whether a buried microphone can perform both the seismic and the acoustic sensing functions. In this paper, we analyze the performance of a buried microphone for detecting voice and footsteps.
We conducted an experiment to correlate the information gathered by a suite of hard sensors with the information on social networks such as Twitter, Facebook, etc. The experiment consisted of monitoring traffic on a well-traveled road and on a road inside a facility. The sensor suite selected mainly consists of sensors that require low power for operation and last a long time. The output of each sensor is analyzed to classify the targets as ground vehicles, humans, and airborne targets. The algorithm is also used to count the number of targets belonging to each type so the sensor can store the information for anomaly detection. In this paper, we describe the classifier algorithms used for acoustic, seismic, and passive infrared (PIR) sensor data.
Passive infrared (PIR) sensors are widely used as part of an unattended ground sensor suite for situational awareness. Currently, the PIR sensor is mainly used as a wakeup sensor for the imaging sensor in order to conserve power. Since the PIR sensor mainly responds to the thermal radiation from the target, animals in the vicinity of the sensor can cause many false alarms. The number of false alarms can be cut drastically if the target's size can be estimated and a decision is made based on target size. For example, if the target is 5 ft 9 in tall and 1.5 ft wide, it is most likely a human being as opposed to an animal. In this paper, we present a technique to estimate target size using two PIR sensors with Fresnel lens arrays. One of the PIR sensors is mounted such that its Fresnel zones are horizontal to the ground, and the second PIR sensor is mounted such that its Fresnel zones are at a slant angle to the horizontal plane. The former is used to estimate the width/length, while the latter is used to estimate the height of the target. The relative signal strength between the two sensors is used to estimate the distance of the target from the sensor. The time it takes to cross the Fresnel zones is used to estimate the speed of the target. The algorithm is tested using the data collected in the woods, where several animals are observed roaming.
Cadence analysis has been the main focus for discriminating between the seismic signatures of people and animals.
However, cadence analysis fails when multiple targets are generating the signatures. We analyze the mechanism
of human walking and the signature generated by a human walker, and compare it with the signature generated
by a quadruped. We develop a Fourier-based analysis to differentiate the human signatures from the animal
signatures. We extract a set of basis vectors to represent the human and animal signatures using non-negative
matrix factorization, and use them to separate and classify both types of targets. Grazing animals such as deer, cows,
etc., often produce sporadic signals as they move around from patch to patch of grass and one must characterize
them so as to differentiate their signatures from signatures generated by a horse steadily walking along a path.
These differences in the signatures are used in developing a robust algorithm to distinguish the signatures of
animals from humans. The algorithm is tested on real data collected in a remote area.
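As an illustration of the non-negative matrix factorization step described above, the following is a minimal sketch, not the authors' implementation: it learns a spectral basis per class from labeled seismic recordings and classifies a new signature by reconstruction error. The sampling rate, number of components, and the placeholder signals are assumptions.

```python
# Minimal sketch (not the authors' implementation): learn NMF spectral basis vectors
# per class from seismic spectrograms and classify a new signature by which basis
# reconstructs it better. Sampling rate, component count, and signals are placeholders.
import numpy as np
from scipy.signal import spectrogram
from scipy.optimize import nnls
from sklearn.decomposition import NMF

fs = 1000  # assumed geophone sampling rate (Hz)
rng = np.random.default_rng(0)
# Placeholder signals; replace with real labeled geophone recordings.
human_sig = rng.standard_normal(10 * fs)
animal_sig = rng.standard_normal(10 * fs)
test_sig = rng.standard_normal(10 * fs)

def magnitude_spectrogram(x):
    """Non-negative spectrogram, shape (n_freq, n_frames)."""
    _, _, S = spectrogram(x, fs=fs, nperseg=256)
    return S

def learn_basis(signal, n_components=8):
    """Learn non-negative spectral basis vectors (n_components x n_freq) for one class."""
    V = magnitude_spectrogram(signal)
    model = NMF(n_components=n_components, init="nndsvda", max_iter=500)
    model.fit(V.T)                   # rows = time frames, columns = frequency bins
    return model.components_

def reconstruction_error(V, basis):
    """Average residual when projecting each frame onto the basis (non-negative weights)."""
    residuals = [nnls(basis.T, frame)[1] for frame in V.T]
    return float(np.mean(residuals))

W_human, W_animal = learn_basis(human_sig), learn_basis(animal_sig)
V_test = magnitude_spectrogram(test_sig)
label = ("human" if reconstruction_error(V_test, W_human) < reconstruction_error(V_test, W_animal)
         else "animal")
print("classified as:", label)
```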
KEYWORDS: Doppler effect, Sensors, Ultrasonics, Matrices, Signal to noise ratio, Data analysis, Analytical research, Signal generators, Seismic sensors, Magnetic sensors
In this paper we analyze the seismic signals generated by people and animals walking. It is known that when
a person walks, the heel strikes first and then the front of the foot; whereas animals walk on their hoofs. This
difference in the walking patterns results in significant changes in the seismic signatures of people and
animals. Similarly, men walk differently than women and also have different weight distributions, resulting
in different signatures for men and women; they also have different cadence or gait patterns. We identify the
significant features in the seismic signatures that distinguish people from animals. Ultrasonic Doppler returns capture
the variations in gait, and these Doppler returns will be analyzed to distinguish people from animals. Algorithms to
classify the signatures will be provided. The algorithms will be tested on data collected at a horse farm with
men and women walking. The results will be discussed along with possible future research directions to
reduce the number of false alarms.
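The cadence features mentioned above can be made concrete with a small sketch. This is an illustrative approach, not the paper's algorithm: it band-passes a geophone signal, takes the envelope, and reads the step rate off the envelope spectrum; the band limits and the synthetic test signal are assumptions.

```python
# Minimal sketch (illustrative, not the authors' algorithm): estimating cadence
# (step rate) from a seismic footstep signal via its envelope spectrum.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def estimate_cadence(x, fs):
    """Return the dominant step rate in Hz from the signal envelope."""
    # Band-pass the raw geophone signal to the footstep-impact band (assumed 5-80 Hz).
    b, a = butter(4, [5.0, 80.0], btype="bandpass", fs=fs)
    y = filtfilt(b, a, x)
    # Envelope via the analytic signal, then remove the DC component.
    env = np.abs(hilbert(y))
    env -= env.mean()
    # The cadence shows up as the dominant low-frequency peak of the envelope spectrum.
    spec = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    band = (freqs > 0.5) & (freqs < 5.0)       # plausible human/animal step rates
    return freqs[band][np.argmax(spec[band])]

fs = 1000
t = np.arange(0, 10, 1.0 / fs)
# Placeholder: synthetic footsteps at ~2 steps/s plus noise; replace with real data.
x = np.zeros_like(t)
x[(np.arange(len(t)) % int(fs / 2.0)) == 0] = 1.0
x = np.convolve(x, np.hanning(50), mode="same") + 0.05 * np.random.randn(len(t))
print("estimated cadence (Hz):", estimate_cadence(x, fs))
```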
KEYWORDS: Sensors, Ultrasonics, Doppler effect, Acoustics, Algorithm development, Signal detection, Signal generators, Electric field sensors, Detection and tracking algorithms, Data analysis
In this paper, we address the issues involved in detecting and classifying people walking and jogging/running.
When the people are walking, sensors observe the signals for a longer period compared to the case in which
people are jogging. To identify fast-moving people, one must make the decision based on the few telltale signals
generated by a person jogging: a higher impact of a foot on the ground, which can be monitored by seismic
sensors; the panting noise observed through an acoustic sensor; or a higher Doppler from an ultrasonic sensor,
to name a few. First, we investigate the phenomenology associated with seismic signals generated by a person
walking and jogging. Then, we analyze ultrasonic signatures to distinguish the characteristics associated with
them. Finally, we develop the algorithms to detect and classify people walking and jogging. These algorithms
are tested on data collected in an outdoor environment.
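As a rough illustration of the ultrasonic Doppler cue, the sketch below estimates radial speed from the dominant Doppler-shifted tone of a continuous-wave return and applies a simple walk/jog threshold. The carrier frequency, sampling rate, and the 2.5 m/s threshold are assumptions, not values from the paper.

```python
# Minimal sketch (assumed parameters, not the authors' method): estimating target
# speed from the Doppler shift of a continuous-wave ultrasonic return, then applying
# a simple walk/jog threshold. f0, c, and the 2.5 m/s threshold are illustrative.
import numpy as np

def doppler_speed(rx, fs, f0=40e3, c=343.0):
    """Estimate radial speed (m/s) from the dominant Doppler-shifted tone."""
    spec = np.abs(np.fft.rfft(rx * np.hanning(len(rx))))
    freqs = np.fft.rfftfreq(len(rx), d=1.0 / fs)
    f_peak = freqs[np.argmax(spec)]
    return abs(f_peak - f0) * c / (2.0 * f0)   # two-way Doppler for a monostatic sensor

fs = 192000                      # assumed ADC rate for a 40 kHz ultrasonic sensor
t = np.arange(0, 0.5, 1.0 / fs)
v_true = 3.0                     # placeholder target speed (m/s)
f_shifted = 40e3 * (1 + 2 * v_true / 343.0)
rx = np.cos(2 * np.pi * f_shifted * t) + 0.1 * np.random.randn(len(t))

v = doppler_speed(rx, fs)
print("speed %.2f m/s ->" % v, "jogging/running" if v > 2.5 else "walking")
```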
KEYWORDS: Sensors, Sensor networks, Video, Video surveillance, Data fusion, Infrared radiation, Surveillance, Acoustics, Information fusion, Video processing
Data fusion plays a major role in assisting decision makers by providing them with improved situational
awareness so that informed decisions can be made about the events that occur in the field. This involves
combining a multitude of sensor modalities such that the resulting output is better (i.e., more accurate, complete,
dependable, etc.) than it would have been if the data streams (hereinafter referred to as 'feeds') from the
resources were taken individually. However, these feeds lack any context-related information (e.g., detected event,
event classification, relationships to other events, etc.). This hinders the fusion process and may result in
an incorrect picture of the situation, which in turn produces false alarms and wastes valuable time and resources.
In this paper, we propose an approach that enriches feeds with semantic attributes so that these feeds have
proper meaning. This will assist underlying applications in presenting analysts with the correct feeds for a particular
event for fusion. We argue that annotated, stored feeds will assist in easy retrieval of historical data that may be
related to the current fusion task. We use OWL-DL, a subset of the Web Ontology Language (OWL), to provide a
lightweight and efficient knowledge layer for feed annotation, and we use rules to capture crucial domain concepts.
We discuss a solution architecture and provide a proof-of-concept tool to evaluate the proposed approach. We
discuss the importance of such an approach with a set of use cases and show how a tool like the one proposed
could assist analysts and planners in making better-informed decisions.
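To make the annotation idea concrete, the sketch below attaches semantic attributes (event type, detecting feed, confidence, related events) to a feed as RDF triples using rdflib and queries them with SPARQL. The namespace, class names, and identifiers are hypothetical, not the ontology used in the paper.

```python
# Minimal sketch (hypothetical ontology names, not the authors' schema): annotating a
# sensor feed with semantic attributes as RDF triples using rdflib, so downstream
# fusion applications can query feeds by event type.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/feeds#")   # placeholder namespace

g = Graph()
g.bind("ex", EX)

feed = URIRef(EX["acousticFeed42"])           # hypothetical feed identifier
event = URIRef(EX["event_0932"])              # hypothetical detected event

g.add((feed, RDF.type, EX.SensorFeed))
g.add((feed, EX.modality, Literal("acoustic")))
g.add((event, RDF.type, EX.VehicleDetection))
g.add((event, EX.detectedBy, feed))
g.add((event, EX.confidence, Literal(0.87, datatype=XSD.double)))
g.add((event, EX.relatedTo, URIRef(EX["event_0930"])))

# A SPARQL query an analyst tool might run: all vehicle detections and their feeds.
results = g.query("""
    PREFIX ex: <http://example.org/feeds#>
    SELECT ?event ?feed WHERE {
        ?event a ex:VehicleDetection ;
               ex:detectedBy ?feed .
    }
""")
for row in results:
    print(row.event, row.feed)
```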
Multiple sensors with multiple modalities are routinely deployed in forward areas to gain situational
awareness. Some of the sensors are activity detection sensors, such as acoustic, seismic, passive infrared (PIR), and
magnetic sensors, which normally consume low power. These sensors often cue or wake up more power-hungry sensors,
such as imaging sensors (namely, visible and infrared cameras) and radar, either to capture a picture or to track a
target of interest. Several airborne sensors routinely gather information on an area of interest using radar and imaging
sensors for intelligence, surveillance and reconnaissance (ISR) purposes. Recently, the Empire Challenge exercise has introduced a new
concept: harvesting ISR data from remotely distributed unattended ground sensors. Here, an aerial vehicle
flies over the area occasionally and queries whether the sensors have any data to be harvested. Harvesting large amounts of data
is unnecessary and impractical, so some amount of fusion of the sensor data is essential.
KEYWORDS: Sensors, Personal digital assistants, Acoustics, Atmospheric propagation, Firearms, Signal detection, Commercial off the shelf technology, Operating systems, Clocks, Homeland security
In this paper, an algorithm for sniper localization using disparate single-microphone sensors that relies only on the time
difference of arrival (TDOA) between the muzzle blast and the shock wave is presented. As with any algorithm that searches for an
optimal solution, this algorithm faces the problem of local minima (spurious candidate sniper locations). In order to find the
global or near-global solution, one has to search over a large area. In order to reduce the computational burden,
the search space needs to be small. In this paper, upper and lower bounds on the range for the search space are
estimated using the sensor configuration. Based on these bounds, the area around the bullet's path is searched
to determine the exact or near-global solution for the sniper location. The results of the sniper localization algorithm
applied to real data collected in a field test will be presented.
In this paper, we consider the problem of detecting the presence of footsteps using signal measurements from
a network of seismic sensors. Since the sensors are closely spaced, their measurements are correlated. A
novel method for detection that exploits the spatial dependence of sensor measurements using copula functions is
proposed. An approach for selecting the copula function that is most suited for modeling the spatial dependence
of sensor observations is also provided. The performance of the proposed approach is illustrated using real
footstep signals collected using an experimental test-bed consisting of seismic sensors.
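The following sketch illustrates the general idea of copula-based dependence modeling for two closely spaced seismic channels; it uses a Gaussian copula with empirical marginals and a copula log-likelihood as the detection statistic. The copula family, the fitting shortcut, and the synthetic data are illustrative choices, not the paper's selected model.

```python
# Minimal sketch (illustrative, not the paper's selected copula): modeling the spatial
# dependence of two co-located seismic channels with a Gaussian copula and using the
# joint copula likelihood as a detection statistic. Marginals here are empirical.
import numpy as np
from scipy.stats import norm, multivariate_normal, rankdata

def empirical_cdf(train, x):
    """Probability-integral transform of x using the empirical CDF of train."""
    return (np.searchsorted(np.sort(train), x, side="right") + 0.5) / (len(train) + 1)

def gaussian_copula_logdensity(u1, u2, rho):
    """Log-density of a bivariate Gaussian copula at (u1, u2)."""
    z = np.column_stack([norm.ppf(u1), norm.ppf(u2)])
    cov = np.array([[1.0, rho], [rho, 1.0]])
    joint = multivariate_normal(mean=[0, 0], cov=cov).logpdf(z)
    return joint - norm.logpdf(z[:, 0]) - norm.logpdf(z[:, 1])

rng = np.random.default_rng(1)
# Placeholder "footstep-present" training data for two nearby geophones (correlated);
# replace with real measurements from the test bed.
n = 2000
common = rng.standard_normal(n)
s1 = common + 0.5 * rng.standard_normal(n)
s2 = common + 0.5 * rng.standard_normal(n)

# Fit the copula correlation from the Spearman rank correlation of the training data.
rho = 2 * np.sin(np.pi / 6 * np.corrcoef(rankdata(s1), rankdata(s2))[0, 1])

# Test statistic on a new window: positive values favor the "dependent" (footstep) model.
x1, x2 = s1[:200], s2[:200]
u1, u2 = empirical_cdf(s1, x1), empirical_cdf(s2, x2)
stat = gaussian_copula_logdensity(u1, u2, rho).sum()
print("copula log-likelihood ratio statistic:", stat)
```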
A group of acoustic arrays that provides direction-of-approach estimates also supports classification of vehicles using
the beams formed during that estimation. Successful simultaneous tracking and classification has demonstrated
the value of such a sensing resource as a UGS installation. We now consider potential attacks on the integrity of
such an installation, describing the effect of compromised acoustic arrays on the data analysis and the tracking and
classification results. We indicate how these can be automatically recognized, and note that calibration methods
intended for deployment time can be used for recovery during operation, which opens the door to methods for
recovery from the compromise without re-configuring the equipment, using abductive reasoning to discover the
necessary re-processing structure.
By rotating an acoustic array, the tracking stability and implied path of a tracked entity can be distorted
while leaving the data and analysis from individual arrays self-consistent. Less structured modifications, such as
unstructured re-ordering of microphone connections, impact the basic data analysis. We examine the effect of
these classes of attack on the integrity of a set of unattended acoustic arrays, and consider the steps necessary
for detection, diagnosis, and recovery of an effective sensing system. Understanding these steps plays an important
part in reasoning in support of balance of investment, planning, operation, and post-hoc analysis.
In this work we analyze the performance of several approaches to sniper localization in a network of mobile sensors.
Mobility increases the complexity of calibration (i.e., self localization, orientation, and time synchronization) in
a network of sensors. The sniper localization approaches studied here rely on time-difference of arrival (TDOA)
measurements of the muzzle blast and shock wave from multiple, distributed single-sensor nodes. Although
these approaches eliminate the need for self-orienting, node position calibration and time synchronization are
still persistent problems. We analyze the influence of geometry and the sensitivity to time synchronization and
node location uncertainties. We provide a Cramer-Rao bound (CRB) for location and bullet trajectory estimator
errors for each respective approach. When the TDOA is taken as the difference between the muzzle blast and
shock wave arrival times, the resulting localization performance is independent of time synchronization and is
less affected by geometry compared to other approaches.
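A generic numerical version of such a bound can be sketched as follows: the Fisher information is built from a finite-difference Jacobian of an arrival-time model under i.i.d. Gaussian timing noise. The forward model below is a simplified muzzle-blast-only placeholder; it omits the shock-wave geometry analyzed in the paper.

```python
# Minimal sketch (simplified model, not the paper's derivation): a numerical
# Cramer-Rao bound for source location from timing measurements, using a
# finite-difference Jacobian. The forward model below uses only spherical
# muzzle-blast propagation as a placeholder; the shock-wave geometry is omitted.
import numpy as np

C = 343.0  # speed of sound (m/s)

def arrival_times(theta, nodes):
    """Placeholder measurement model: muzzle-blast arrival time at each node."""
    src = theta[:2]
    return np.linalg.norm(nodes - src, axis=1) / C

def numerical_jacobian(f, theta, nodes, eps=1e-6):
    """Finite-difference Jacobian of the measurement model w.r.t. theta."""
    f0 = f(theta, nodes)
    J = np.zeros((len(f0), len(theta)))
    for i in range(len(theta)):
        d = np.zeros_like(theta)
        d[i] = eps
        J[:, i] = (f(theta + d, nodes) - f0) / eps
    return J

def crb(theta, nodes, sigma_t):
    """CRB covariance for unbiased estimation with i.i.d. Gaussian timing noise."""
    J = numerical_jacobian(arrival_times, theta, nodes)
    fim = J.T @ J / sigma_t**2          # Fisher information matrix
    return np.linalg.inv(fim)

nodes = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0], [30.0, 30.0]])  # sensor positions (m)
theta = np.array([200.0, 150.0])        # candidate sniper location (m)
cov = crb(theta, nodes, sigma_t=1e-3)   # 1 ms timing uncertainty (assumed)
print("location error std (m):", np.sqrt(np.diag(cov)))
```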
KEYWORDS: Sensors, Signal processing, Signal detection, Surveillance, Data analysis, Analytical research, Seismic sensors, Signal analyzers, Wavelets, Transform theory
This paper describes experiments and analysis of seismic signals in addressing the problem of personnel detection
for indoor surveillance. Data was collected using geophones to detect footsteps from walking and running in
indoor environments such as hallways. Our analysis of the data shows the significant presence of nonlinearity,
when tested using the surrogate data method. This necessitates novel detector designs that are not
based on linearity assumptions. We present one such method based on empirical mode decomposition (EMD)
and functional data analysis (FDA) and evaluate its applicability on our collected dataset.
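For reference, a minimal sketch of the surrogate data method follows, using phase-randomized surrogates and a time-reversal-asymmetry statistic, which is one common choice and not necessarily the exact test used here; the placeholder series and surrogate count are assumptions.

```python
# Minimal sketch (a common variant, not necessarily the paper's exact test): the
# surrogate data method for detecting nonlinearity. Phase-randomized surrogates
# preserve the linear (spectral) structure; a nonlinear statistic computed on the
# original series is compared against its distribution over the surrogates.
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Surrogate with the same power spectrum but randomized Fourier phases."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = 0.0                      # keep the DC term real
    phases[-1] = 0.0                     # and the Nyquist term for even-length signals
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

def time_reversal_asymmetry(x, lag=1):
    """Simple nonlinearity statistic; close to 0 for linear Gaussian processes."""
    d = x[lag:] - x[:-lag]
    return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5

def surrogate_test(x, n_surrogates=99, rng=None):
    """Fraction of surrogates whose statistic falls below the original's."""
    if rng is None:
        rng = np.random.default_rng(0)
    s0 = abs(time_reversal_asymmetry(x))
    surrogate_stats = [abs(time_reversal_asymmetry(phase_randomized_surrogate(x, rng)))
                       for _ in range(n_surrogates)]
    # If the original ranks above all surrogates, linearity is rejected at ~1/(n+1).
    return np.sum(s0 > np.array(surrogate_stats)) / (n_surrogates + 1)

rng = np.random.default_rng(2)
x = rng.standard_normal(4096)   # placeholder series; replace with geophone footstep data
print("fraction of surrogates below the original statistic:", surrogate_test(x))
```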
One can think of the human body as a sensory network. In particular, the skin has numerous neurons that provide the sense of
touch with different sensitivities, as well as neurons for communicating the sensory signals to the brain. Even though the skin
might occasionally experience some lacerations, it performs remarkably well (it is fault tolerant) despite the failure of some
sensors.
One of the challenges in collaborative wireless sensor networks (WSN) is fault-tolerant detection and localization of
targets. In this paper we present a biologically inspired architecture model for WSNs. Diagnosis of sensors in the WSN
model presented here is derived from the concept of the immune system. We present a WSN architecture for
detection and localization of multiple targets inspired by the human nervous system. We show that the advantages of such
bio-inspired networks are reduced data for communication, real-time self-diagnosis to detect faulty sensors, and the
ability to localize events. We present the results of our algorithms on simulated data.
Unattended ground sensors (UGS) are routinely used to collect intelligence, surveillance, and reconnaissance (ISR)
information. Unattended ground sensors consisting of a microphone array and a geophone are employed to detect rotary-wing
aircraft.
This paper presents an algorithm for the detection of helicopters based on a fusion of rotor harmonics and acoustic-seismic
coupling. The main rotor blades of helicopters operate at a fixed RPM to prevent stalling or mechanical damage.
In addition, the seismic spectrum is dominated by the acoustic-seismic coupling generated by these rotors, much more so
than for ground vehicles and other targets, where mechanical coupling and a more broadband acoustic source are strong
factors. First, an autocorrelation detection method identifies the constant fundamental generated by the helicopter main
rotor. Second, key matching frequencies between the acoustic and seismic spectrum are used to locate possible coupled
components. Detection can then be based on the ratio of the coupled seismic energy to total seismic energy. The results
of the two methods are fused over a few seconds' time to provide an initial and continued detection of a helicopter within
the sensor range. Performance is measured on data as a function of range and sound pressure level (SPL).
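The two detection stages can be sketched roughly as follows: an autocorrelation search for a stable fundamental in an assumed main-rotor band, and a ratio of seismic energy near the acoustic harmonics to total seismic energy. The band limits, tolerances, and toy signals are assumptions, not the fielded parameters.

```python
# Minimal sketch (illustrative parameters, not the fielded algorithm): (1) find a
# stable fundamental via autocorrelation of the acoustic signal, and (2) compute the
# ratio of seismic energy near acoustic-seismic coupled harmonics to total seismic energy.
import numpy as np

def fundamental_from_autocorr(x, fs, fmin=10.0, fmax=40.0):
    """Estimate the main-rotor fundamental (Hz) from the autocorrelation peak."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    lag_min, lag_max = int(fs / fmax), int(fs / fmin)
    lag = lag_min + np.argmax(r[lag_min:lag_max])
    return fs / lag

def coupled_energy_ratio(seismic, fs, f0, n_harmonics=8, tol_hz=1.0):
    """Fraction of seismic spectral energy lying near harmonics of f0."""
    spec = np.abs(np.fft.rfft(seismic * np.hanning(len(seismic)))) ** 2
    freqs = np.fft.rfftfreq(len(seismic), 1.0 / fs)
    mask = np.zeros_like(freqs, dtype=bool)
    for k in range(1, n_harmonics + 1):
        mask |= np.abs(freqs - k * f0) < tol_hz
    return spec[mask].sum() / spec.sum()

fs = 1000
t = np.arange(0, 4, 1.0 / fs)
f_rotor = 22.0                                   # placeholder main-rotor fundamental (Hz)
acoustic = sum(np.cos(2 * np.pi * k * f_rotor * t) / k for k in range(1, 6))
acoustic += 0.2 * np.random.randn(len(t))
seismic = 0.5 * acoustic + 0.3 * np.random.randn(len(t))   # toy acoustic-seismic coupling

f0 = fundamental_from_autocorr(acoustic, fs)
ratio = coupled_energy_ratio(seismic, fs, f0)
print("estimated fundamental %.1f Hz, coupled/total seismic energy %.2f" % (f0, ratio))
# A detection might require f0 to stay constant over several seconds and the ratio to
# exceed a threshold; both the 10-40 Hz band and any threshold are assumptions here.
```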
KEYWORDS: Sensors, Acoustics, Electric field sensors, Infrared sensors, Fluctuations and noise, Magnetic sensors, Seismic sensors, Magnetism, Video, Detection and tracking algorithms
A multi-modal sensor suite mounted on a mobile platform such as a robot has several advantages. The robot can be sent
into a cave or a cleared building to observe and determine the presence of unwanted people prior to entering those
facilities. The robotic platform poses several challenges, for example, it can be noisy while it is in motion. Its electrical
activity might interfere with the magnetic and electric field sensors. Its vibrations may induce noise into seismic sensors.
We study the performance of acoustic, seismic, passive infrared (PIR), magnetic, electrostatic, and video sensors
mounted on a robotic platform such as a Packbot for the detection of personnel. The study focuses on the quality of the sensor data
collected. In turn, the study would determine whether additional processing of the data is required to mitigate the platform-induced
noise for detection of personnel. In particular, the study focuses on the following: whether different sensors
interfere with one another when operating in close proximity (for example, the effect on the magnetic and electrostatic sensors), and
a comparison of personnel detection algorithms developed for the mobile platform and for a stationary sensor suite in terms of
probability of detection, false alarms, and effects of fusion.
In this paper we present a multi-modal multi-sensor fusion algorithm for the
detection of personnel. The unattended ground sensors employed consist of acoustic,
seismic, and passive infrared (PIR) sensors and a video camera. The individual sensor data is
processed and the probabilities of detection of a person are estimated. Then, the
individual probability estimates are fused to estimate the overall probability of detection
of a person. The confidence levels of each sensor modality are estimated based on a large
set of data. The performance of the algorithm is tested on data collected in an unoccupied
basement of a building with single and multiple people present.
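One common way to realize such a confidence-weighted fusion, not necessarily the paper's exact rule, is a weighted log-odds (logarithmic opinion pool) combination, sketched below with hypothetical per-modality probabilities and confidences.

```python
# Minimal sketch (one common fusion rule, not necessarily the paper's): combining
# per-modality detection probabilities with per-modality confidence weights using a
# weighted log-odds (logarithmic opinion pool) fusion.
import math

def fuse_detections(probs, confidences):
    """Fuse per-sensor P(person present) values, weighted by per-sensor confidence."""
    eps = 1e-6
    logodds = [math.log(min(max(p, eps), 1 - eps) / (1 - min(max(p, eps), 1 - eps)))
               for p in probs]
    fused = sum(c * lo for c, lo in zip(confidences, logodds)) / sum(confidences)
    return 1.0 / (1.0 + math.exp(-fused))

# Hypothetical outputs: acoustic, seismic, PIR, and video detection probabilities,
# with confidences assumed to be learned offline from a large labeled data set.
probs = [0.70, 0.85, 0.60, 0.95]
confidences = [0.6, 0.8, 0.5, 0.9]
print("fused probability of a person present: %.3f" % fuse_detections(probs, confidences))
```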
The U.S. Army Research Laboratory has developed a real-time multi-modal sensor for the purpose of personnel detection in urban terrain. Possible system usage includes force protection and sniper early warning. The sensor system includes a network of MMUGS sensors, a third-party gateway, and a user interface device. A MMUGS sensor performs the following functions: sensing, processing, and communication. Each sensor is composed of multiple sensing
modalities: acoustic, passive infrared, and seismic. A MMUGS sensor is designed to be low cost and power efficient. This paper will first present an overview of the sensor architecture and then provide detailed descriptions of its subcomponents. The paper will conclude with a detailed analysis of system performance. This paper is intended to provide details of the design, integration, and implementation of a MMUGS unit, and to demonstrate the overall sensor system performance. This paper does not discuss the network aspect of the system and its effect on performance.
KEYWORDS: Sensors, Acoustics, Roads, Target detection, Data processing, Analytical research, Signal processing, Detection and tracking algorithms, Autoregressive models, Scene classification
The Acoustics Signal Processing Branch at the U.S. Army Research Laboratory has been investigating tracking and classification of military and civilian vehicles using acoustic sensors. Currently, the target signals are modeled as a sum of harmonics and then classified using a multivariate Gaussian classifier at each individual node. When multiple targets are present in the scene, overall classification of the targets deteriorates, as the signals from several targets are mixed together and determination of individual target harmonics becomes difficult. This is particularly true for civilian vehicles. In order to improve the overall probability of correct classification, a distributed classifier will be implemented. In distributed processing, each sensor node broadcasts its classification information, that is, the probabilities of detection of the various targets, to all the sensors within its vicinity. At each sensor node, a distributed Bayesian classifier is used to determine the overall classification of each target. Unlike centralized processing, distributed processing is robust to failures in sensor nodes. Although the technique is known, it has previously been tested using only simulated data. In this paper we present the results of the algorithm on real data that was collected using several acoustic sensors with a mixture of military and civilian vehicles. This identifies how well the distributed processing works, and its limitations, in classifying multiple targets using acoustic data.
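One standard realization of such a distributed Bayesian combination, not necessarily ARL's exact rule, is sketched below: each node's class posteriors are converted back to likelihoods and fused with a naive-Bayes product under a conditional-independence assumption.

```python
# Minimal sketch (one standard realization, not necessarily ARL's exact rule): each
# node broadcasts its per-class probabilities; a node fuses its neighbors' estimates
# with a naive-Bayes product rule assuming conditionally independent observations.
import numpy as np

def fuse_node_probabilities(node_probs, prior=None):
    """node_probs: (n_nodes, n_classes) local posteriors. Returns the fused posterior."""
    node_probs = np.asarray(node_probs, dtype=float)
    n_classes = node_probs.shape[1]
    prior = np.full(n_classes, 1.0 / n_classes) if prior is None else np.asarray(prior)
    # Convert each local posterior back to a likelihood (divide by the prior), multiply
    # across nodes in the log domain, reapply the prior once, then normalize.
    log_lik = np.log(node_probs + 1e-12) - np.log(prior)
    fused = np.log(prior) + log_lik.sum(axis=0)
    fused = np.exp(fused - fused.max())
    return fused / fused.sum()

# Hypothetical local posteriors for classes [tracked, heavy wheeled, light wheeled]
# reported by three acoustic nodes observing the same target.
local = [[0.5, 0.3, 0.2],
         [0.6, 0.3, 0.1],
         [0.4, 0.4, 0.2]]
print("fused class probabilities:", fuse_node_probabilities(local))
```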
KEYWORDS: Sensors, Detection and tracking algorithms, Acoustics, Signal to noise ratio, Data fusion, Roads, Algorithm development, Databases, Roentgenium, Target detection
Target tracking and classification using passive acoustic signals is difficult at best as the signals are contaminated by wind noise, multi-path effects, road conditions, and are generally not deterministic. In addition, microphone characteristics, such as sensitivity, vary with the weather conditions. The problem is further compounded if there are multiple targets, especially if some are measured with higher signal-to-noise ratios (SNRs) than the others and they share spectral information.
At the U. S. Army Research Laboratory we have conducted several field experiments with convoys of two, three, four, and five vehicles traveling on different road surfaces, namely gravel, asphalt, and dirt roads. The largest convoy comprises two tracked vehicles and three wheeled vehicles. Two of the wheeled vehicles are heavy trucks and one is a light vehicle.
We used a super-resolution direction-of-arrival estimator, specifically the minimum variance distortionless response, to compute the bearings of the targets. In order to classify the targets, we modeled the acoustic signals emanated from the targets as a set of coupled harmonics, which are related to the engine-firing rate, and subsequently used a multivariate Gaussian classifier. Independent of the classifier, we find tracking of wheeled vehicles to be intermittent as the signals from vehicles with high SNR dominate the much quieter wheeled vehicles. We used several fusion techniques to combine tracking and classification results to improve final tracking and classification estimates. We will present the improvements (or losses) made in tracking and classification of all targets. Although improvements in the estimates for tracked vehicles are not noteworthy, significant improvements are seen in the case of wheeled vehicles. We will present the fusion algorithm used.
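As background for the bearing-estimation step, the sketch below computes an MVDR (minimum variance distortionless response) spatial spectrum; it assumes a uniform linear array and illustrative source parameters, whereas the fielded arrays and processing may differ.

```python
# Minimal sketch (assumes a uniform linear array; the fielded arrays may differ):
# MVDR (Capon) spatial spectrum for bearing estimation from multichannel acoustic data.
import numpy as np

def mvdr_spectrum(X, freq, d, c=343.0, angles_deg=np.arange(-90, 90.5, 0.5)):
    """X: (n_mics, n_snapshots) narrowband snapshots at frequency `freq`.
    d: microphone spacing (m). Returns (angles, MVDR power)."""
    n_mics = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                          # sample covariance
    R += 1e-3 * np.trace(R).real / n_mics * np.eye(n_mics)   # diagonal loading
    Rinv = np.linalg.inv(R)
    k = 2 * np.pi * freq / c
    power = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(1j * k * d * np.arange(n_mics) * np.sin(theta))  # steering vector
        power.append(1.0 / np.real(a.conj() @ Rinv @ a))
    return angles_deg, np.array(power)

rng = np.random.default_rng(3)
n_mics, n_snap, freq, d = 6, 200, 80.0, 1.0   # placeholder array and source parameters
true_angle = np.deg2rad(25.0)
k = 2 * np.pi * freq / 343.0
a_true = np.exp(1j * k * d * np.arange(n_mics) * np.sin(true_angle))
S = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
X = np.outer(a_true, S) + 0.1 * (rng.standard_normal((n_mics, n_snap))
                                 + 1j * rng.standard_normal((n_mics, n_snap)))
angles, p = mvdr_spectrum(X, freq, d)
print("estimated bearing: %.1f deg" % angles[np.argmax(p)])
```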
In this paper we present an algorithm to determine the location of an acoustic sensor array using the direction of arrival (DOA) estimates of a moving acoustic source whose ground truth is available. Determination of the location and orientation of a sensor array based on the statistics of errors in the DOA estimates is a nonlinear regression problem. We formulate and derive the necessary equations to solve this problem in terms of the bearing estimates of the acoustic source and its location. The algorithm is tested against helicopter data from three acoustic sensor arrays distributed over a field.
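An illustrative formulation of this regression, not the paper's exact derivation, is sketched below: the array position and orientation are found by nonlinear least squares on wrapped bearing residuals against the known source track; the geometry and noise level are placeholders.

```python
# Minimal sketch (an illustrative formulation, not the paper's exact derivation):
# estimate an acoustic array's position (x, y) and orientation phi from bearing
# measurements to a moving source with known ground-truth positions, via nonlinear
# least squares on wrapped bearing residuals.
import numpy as np
from scipy.optimize import least_squares

def wrap(angle):
    """Wrap angles to (-pi, pi]."""
    return (angle + np.pi) % (2 * np.pi) - np.pi

def residuals(params, src_xy, bearings):
    x, y, phi = params
    predicted = np.arctan2(src_xy[:, 1] - y, src_xy[:, 0] - x) - phi
    return wrap(predicted - bearings)

rng = np.random.default_rng(4)
# Placeholder ground truth: a source (e.g., a helicopter) moving along a line, and an
# array at (120, -40) m with a 30-degree orientation offset.
src_xy = np.column_stack([np.linspace(-500, 500, 60), np.full(60, 300.0)])
true_pos, true_phi = np.array([120.0, -40.0]), np.deg2rad(30.0)
bearings = wrap(np.arctan2(src_xy[:, 1] - true_pos[1], src_xy[:, 0] - true_pos[0])
                - true_phi + np.deg2rad(1.0) * rng.standard_normal(len(src_xy)))

sol = least_squares(residuals, x0=[0.0, 0.0, 0.0], args=(src_xy, bearings))
print("estimated position:", sol.x[:2], "orientation (deg): %.1f" % np.degrees(sol.x[2]))
```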
KEYWORDS: Detection and tracking algorithms, Acoustics, Roads, Sensors, Signal detection, Analytical research, Signal to noise ratio, Scene classification, Target detection, Algorithm development
In this paper we discuss an algorithm for classification and identification of multiple targets using acoustic signatures. We use a Multi-Variate Gaussian (MVG) classifier for classifying individual targets based on the relative amplitudes of the extracted harmonic set of frequencies. The classifier is trained on high signal-to-noise ratio data for individual targets. In order to classify and further identify each target in a multi-target environment (e.g., a convoy), we first perform bearing tracking and data association. Once the bearings of the targets present are established, we next beamform in the direction of each individual target to spatially isolate it from the other targets (or interferers). Then, we further process and extract a harmonic feature set from each beamformed output. Finally, we apply the MVG classifier on each harmonic feature set for vehicle classification and identification. We present classification/identification results for convoys of three to five ground vehicles.
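The MVG classification stage alone can be sketched as follows; the beamforming and harmonic-set extraction that produce the feature vectors are omitted, and the training features are placeholders.

```python
# Minimal sketch of the multivariate Gaussian (MVG) classification stage only
# (the beamforming and harmonic-set extraction that produce the features are omitted):
# fit one Gaussian per vehicle class on relative harmonic amplitudes, then classify by
# maximum log-likelihood. The training arrays here are placeholders.
import numpy as np
from scipy.stats import multivariate_normal

class MVGClassifier:
    def fit(self, features_by_class):
        """features_by_class: dict label -> (n_samples, n_harmonics) training features."""
        self.models = {
            label: multivariate_normal(mean=F.mean(axis=0),
                                       cov=np.cov(F, rowvar=False) + 1e-6 * np.eye(F.shape[1]))
            for label, F in features_by_class.items()
        }
        return self

    def predict(self, f):
        """Return the class label with the highest log-likelihood for feature vector f."""
        return max(self.models, key=lambda label: self.models[label].logpdf(f))

rng = np.random.default_rng(5)
n_harmonics = 10
train = {  # placeholder relative harmonic amplitudes for two vehicle classes
    "tracked": rng.normal(1.0, 0.2, (200, n_harmonics)),
    "wheeled": rng.normal(0.6, 0.2, (200, n_harmonics)),
}
clf = MVGClassifier().fit(train)
print(clf.predict(rng.normal(1.0, 0.2, n_harmonics)))
```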
KEYWORDS: Sensors, Detection and tracking algorithms, Acoustics, Algorithm development, Signal processing, Signal to noise ratio, Data processing, Neodymium, Analytical research, Target acquisition
In this paper we present an algorithm to track a convoy of several targets in a scene using acoustic sensor array data. The tracking algorithm is based on a template of the direction of arrival (DOA) angles for the leading target. Often the first target is the closest target to the sensor array and hence the loudest, with a good signal-to-noise ratio. Several steps were used to generate a template of the DOA angle for the leading target, namely, (a) the angle at the present instant should be close to the angle at the previous instant and (b) the angle at the present instant should be within error bounds
of the predicted value based on the previous values. Once the template of the DOA angles of the leading target is developed, it is used to predict the DOA angle tracks of the remaining targets. In order to generate the tracks for the remaining targets, a track is first established if the angles correspond to the initial track values of the first target. Second, the time delays between the first track and the remaining tracks are estimated at the points of highest correlation between the first track and the remaining tracks. As the vehicles move at different speeds, the tracks either compress or expand
depending on whether a target is moving fast or slow compared to the first target. The expansion and compression ratios are estimated and used to estimate the predicted DOA angle values of the remaining targets. Based on these predicted DOA angles of the remaining targets, the DOA angles obtained from MVDR or incoherent MUSIC are
appropriately assigned to the proper tracks. Several other rules were developed to avoid mixing the tracks.
The algorithm is tested on data collected at Aberdeen Proving Ground with convoys of 3, 4 and 5 vehicles. Some of the vehicles are tracked vehicles and some are wheeled vehicles. The tracking algorithm results are found to be good. The results will be presented at the conference and in the paper.
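Two steps of this procedure, the time-delay estimate between the lead track and a trailing track and the use of a compression/expansion ratio, are sketched below on a toy DOA track; the update rate, geometry, and ratio are illustrative assumptions.

```python
# Minimal sketch (illustrative of two steps in the procedure, not the full tracker):
# estimate the time delay between the lead target's DOA-angle track and a trailing
# target's track via cross-correlation, then use a compression/expansion ratio to
# time-scale the lead template.
import numpy as np

def delay_between_tracks(lead, other, fs=1.0):
    """Delay (in samples / fs) at the peak cross-correlation of two DOA tracks."""
    lead = lead - np.mean(lead)
    other = other - np.mean(other)
    xcorr = np.correlate(other, lead, mode="full")
    return (np.argmax(xcorr) - (len(lead) - 1)) / fs

rng = np.random.default_rng(6)
t = np.arange(0, 200, 1.0)                     # assumed 1 Hz DOA updates (placeholder)
lead_track = np.degrees(np.arctan2(100.0, 300.0 - 5.0 * t))   # lead vehicle DOA vs time
# Trailing vehicle: same path, delayed 20 s and moving 10% slower (toy model).
trail_track = np.degrees(np.arctan2(100.0, 300.0 - 4.5 * (t - 20.0)))
trail_track += 0.5 * rng.standard_normal(len(t))

delay = delay_between_tracks(lead_track, trail_track)
ratio = 5.0 / 4.5   # in practice estimated from the data; here taken from the toy speeds
print("estimated delay: %.0f s, assumed expansion ratio: %.2f" % (delay, ratio))
# The delayed, time-scaled lead template, lead_track((t - delay) / ratio), would then be
# used to predict the trailing target's DOA and gate the MVDR/MUSIC angle assignments.
```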
We present an algorithm based on hidden Markov models (HMM) to detect several types of unexploded ordnance (UXO). We use the synthetic aperture radar (SAR) images simulated for a 155 mm artillery shell, a 2.75 in rocket, and a 105 mm mortar to generate the codebook. The algorithm is applied to data collected at Yuma Proving Ground (YPG). YPG is seeded with several types of UXOs for testing purposes. The data is collected using an ultra-wideband SAR mounted on a telescoping boom to simulate an airborne radar. The algorithm detected all the targets for which it was trained, and it also detected other UXOs that are similar in shape.
Recent development of wideband, high-resolution SAR technology has shown that detecting buried targets over large open areas may be possible. Ground clutter and soil type are two limiting factors influencing the practicality of using wideband SAR for wide-area target detection. In particular, the presence of strong ground clutter because of the unevenness, roughness, or inconsistency of the soil itself may limit the radar's capability to resolve the target from the clutter. Likewise, the soil material properties can also play a major role. The incident wave may experience significant attenuation as the wave penetrates lossy soil. In an attempt to more fully characterize this problem, fully polarimetric ultra-wideband measurements have been taken by the US Army Research Laboratory's SAR at test sites in Yuma, Arizona, and Eglin Air Force Base, Florida. SAR images have been generated for above-ground and subsurface unexploded ordnance targets, including 155-mm shells. Additionally, a full-wave method of moments (MoM) model has been developed for the electromagnetic scattering from these same targets, accounting for the lossy nature and frequency dependency of the various soils. An approximate model based on physical optics (PO) has also been developed. The efficacy of using PO in lieu of the MoM to generate the electromagnetic scattering data is examined. We compare SAR images from the measured data with images produced by the MoM and PO simulations by using a standard back-projection technique.
We present results of an unexploded ordnance (UXO) detection algorithm based on template matching in ultra-wideband (130 MHz to 1.2 GHz) synthetic aperture radar (SAR) data. We compute scattered fields of UXOs in different orientations, both on the surface and buried at different depths, using a physical optics (PO) approximation for perfectly conducting targets in a half space via the half-space Green's function. The PO code that we developed computes the scattered fields in a lossy and dispersive material. This permits simulation of targets in real soil. The frequency-domain scattered fields are transformed into the time domain. SAR images of the UXOs at different aspect angles are generated by a standard backprojection technique, with the same resolution as the ground-penetrating ultra-wideband SAR. These SAR images form the templates for detection of the UXOs.
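The template-matching stage alone can be sketched as a sliding normalized cross-correlation of a simulated-UXO template against a SAR magnitude chip, as below; the template, image, and detection threshold are placeholders, and the PO scattering and backprojection steps are not reproduced.

```python
# Minimal sketch of the template-matching stage only (the PO scattering and
# backprojection that generate the templates are not reproduced here): slide a
# simulated-UXO template over a SAR magnitude image and flag peaks of the
# normalized cross-correlation (NCC).
import numpy as np

def normalized_cross_correlation(image, template):
    """Return an NCC map; values near 1 indicate a strong template match."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-12)
    out = np.full((image.shape[0] - th + 1, image.shape[1] - tw + 1), -1.0)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + th, j:j + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-12)
            out[i, j] = np.mean(p * t)
    return out

rng = np.random.default_rng(7)
template = np.zeros((9, 9))
template[4, 2:7] = 1.0                                   # toy elongated UXO response
image = 0.2 * rng.random((128, 128))                     # placeholder SAR magnitude chip
image[60:69, 40:49] += template                          # embed a target response

ncc = normalized_cross_correlation(image, template)
i, j = np.unravel_index(np.argmax(ncc), ncc.shape)
print("peak NCC %.2f at pixel (%d, %d)" % (ncc[i, j], i, j))
detections = np.argwhere(ncc > 0.6)                      # threshold is an assumption
```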
We present an automatic target detection (ATD) algorithm for foliage penetrating (FOPEN) ultra-wideband (UWB) synthetic aperture radar (SAR) data using split spectral analysis. Split spectral analysis is commonly used in the ultrasonic, non-destructive evaluation of materials using wideband pulses for flaw detection. In this paper, we show the application of split spectral analysis for detecting obscured targets in foliage using UWB pulse returns. To discriminate targets from foliage, the data spectrum is split into several bands, namely, 20 to 75, 75 to 150, ..., 825 to 900 MHz. An ATD algorithm is developed based on the relative energy levels in various bands, the number of bands containing significant energy (spread of energy), and chip size (number of crossrange and range bins). The algorithm is tested on the FOPEN UWB SAR data of foliage and vehicles obscured by foliage collected at Aberdeen Proving Ground, MD. The paper presents various split spectral parameters used in the algorithm and discusses the rationale for their use.
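A rough sketch of the split spectral feature computation follows: the return's spectrum is split into the bands listed above, and per-band energy fractions plus an energy-spread count are formed. The sampling rate, synthetic return, and significance threshold are assumptions; the paper's actual decision rule is not reproduced.

```python
# Minimal sketch (feature computation only; the decision thresholds are placeholders):
# split the spectrum of a UWB SAR return into the bands named in the paper
# (20-75, 75-150, ..., 825-900 MHz) and compute relative band energies and the number
# of bands with significant energy.
import numpy as np

def split_spectral_features(x, fs, significant_fraction=0.05):
    """Return per-band energy fractions and the count of 'significant' bands."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    edges_mhz = [20, 75] + list(range(150, 901, 75))     # 20-75, 75-150, ..., 825-900 MHz
    energies = []
    for lo, hi in zip(edges_mhz[:-1], edges_mhz[1:]):
        band = (freqs >= lo * 1e6) & (freqs < hi * 1e6)
        energies.append(spec[band].sum())
    energies = np.array(energies)
    fractions = energies / (energies.sum() + 1e-12)
    n_significant = int(np.sum(fractions > significant_fraction))
    return fractions, n_significant

fs = 2.4e9                                   # assumed sampling rate covering 0-1.2 GHz
t = np.arange(0, 2e-6, 1.0 / fs)
# Placeholder return: a broadband target-like pulse plus narrowband foliage-like clutter.
x = np.exp(-((t - 1e-6) / 2e-9) ** 2) + 0.3 * np.sin(2 * np.pi * 60e6 * t)
fractions, n_sig = split_spectral_features(x, fs)
print("band energy fractions:", np.round(fractions, 3), "significant bands:", n_sig)
```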