This paper proposes a vision-aided system to control the horizontal velocity of Unmanned Aerial Vehicles (UAVs). It fuses data from an Inertial Measurement Unit (IMU) and an optical flow sensor to measure horizontal velocity. The IMU provides angular rate and acceleration data, while the optical flow sensor provides a two-dimensional incremental displacement of the scene in view. Fusing these complementary data sources enables velocity control without dependence on the Global Positioning System (GPS). A series of simulations, performed at the low altitudes where the optical flow sensor functions best, validated the system's effectiveness. The results demonstrate that fusing the two sensors enables accurate horizontal velocity control, reducing position drift and navigation error compared with using inertial data alone.
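As a concrete illustration of this kind of IMU/optical-flow fusion, the sketch below blends an inertially propagated velocity with a flow-derived velocity in a simple complementary filter. The function name, blending gain, and de-rotation convention are illustrative assumptions, not the paper's design.

```python
import numpy as np

def fuse_velocity(v_prev, accel_body, gyro_rate, flow_px, altitude, focal_px, dt, k=0.1):
    """Minimal complementary-filter sketch (illustrative only).

    v_prev     : previous horizontal velocity estimate [vx, vy] (m/s)
    accel_body : measured horizontal acceleration [ax, ay] (m/s^2)
    gyro_rate  : angular rates [p, q] (rad/s), used to de-rotate the flow
    flow_px    : optical-flow displacement [du, dv] over dt (pixels)
    altitude   : height above ground (m)
    focal_px   : focal length of the flow sensor (pixels)
    k          : blending gain toward the flow-derived velocity
    """
    # Predict velocity by integrating the inertial acceleration.
    v_pred = np.asarray(v_prev, dtype=float) + np.asarray(accel_body, dtype=float) * dt

    # Remove the rotation-induced component of the flow (sign convention
    # depends on sensor mounting), then scale the pixel rate by
    # altitude / focal length to get ground-relative velocity.
    flow_rate = np.asarray(flow_px, dtype=float) / dt          # pixels/s
    flow_rate -= focal_px * np.array([gyro_rate[1], -gyro_rate[0]])
    v_flow = flow_rate * altitude / focal_px                   # m/s

    # Complementary blend: inertial prediction corrected by the flow velocity.
    return (1.0 - k) * v_pred + k * v_flow
```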
A precise relative localization system is crucial for a swarm of Unmanned Aerial Vehicles (UAVs), particularly when the vehicles collaborate on a task. This paper aims to provide an alternative navigation system that enables a swarm of UAVs to conduct autonomous missions in a Global Positioning System (GPS)-denied environment. To achieve this goal, the paper proposes a relative navigation system using an Extended Kalman Filter (EKF) that fuses observations from the on-board Inertial Measurement Unit (IMU) with ranging measurements obtained from the on-board ranging sensors. To ensure secure communication at high data rates, the system employs two waveforms and a low-cost beam-switching phased array. This system thus enables drone operations even in GPS-denied environments. We demonstrate the effectiveness of our approach through simulation experiments involving a swarm of six drones, three fixed and three moving, in a challenging Blue-Angel scenario. Statistical tests on the simulation results show that this method is efficient.
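For illustration, the following sketch shows a single EKF measurement update using one range measurement to a neighboring drone. The state layout (relative position and velocity) and variable names are assumptions made for the example, not the paper's exact formulation.

```python
import numpy as np

def ekf_range_update(x, P, z, anchor_pos, r_var):
    """One EKF update with a single range measurement (sketch, assumed state
    x = [px, py, pz, vx, vy, vz] expressed relative to the anchor drone)."""
    p_rel = x[:3] - anchor_pos
    r_pred = np.linalg.norm(p_rel)

    # Jacobian of the range w.r.t. the state: unit LOS vector for the
    # position components, zeros for the velocity components.
    H = np.zeros((1, x.size))
    H[0, :3] = p_rel / r_pred

    S = H @ P @ H.T + r_var            # innovation covariance (1x1)
    K = P @ H.T / S                    # Kalman gain (6x1)
    x_new = x + (K * (z - r_pred)).ravel()
    P_new = (np.eye(x.size) - K @ H) @ P
    return x_new, P_new
```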
This paper addresses one of the key requirements for a successful terminal-phase defense intercept, namely the ability to discriminate between the reentry vehicle (RV) and decoys using space-based infrared (IR) sensors. In the terminal phase, light objects (decoys) slow down faster due to atmospheric drag and follow substantially different trajectories than heavy objects (RVs). Therefore, the targets' velocity information is used to differentiate between the RV and decoy trajectories within a validation time window. The evaluation of the corresponding Cramér-Rao Lower Bound (CRLB) on the covariance of the estimates, and the statistical tests on the results of simulations, show that this method is statistically efficient.
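A toy version of a deceleration-based discriminator over a validation window might look like the sketch below. The threshold value is a made-up placeholder, and the paper's discriminator operates on estimated velocity information rather than this simple rule.

```python
import numpy as np

def classify_object(speed_history, dt, decel_threshold=50.0):
    """Illustrative discrimination rule: average deceleration over the
    validation window; a light decoy slows down much faster than a heavy RV
    in the terminal phase. The threshold here is hypothetical."""
    speeds = np.asarray(speed_history, dtype=float)
    decel = -(speeds[-1] - speeds[0]) / ((len(speeds) - 1) * dt)   # m/s^2
    return "decoy" if decel > decel_threshold else "RV"
```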
Satellite-based imaging sensors are subject to several factors that may cause the values of the calibration parameters to vary between the time of ground calibration and on-orbit operation. This paper considers the problem of calibrating satellite-based imaging sensors while estimating the state of a target of opportunity. The 2D pixel-based measurements (the estimated location of the target's image in the Focal Plane Array (FPA)) generated by these sensors are used to estimate the sensors' pointing-angle biases. The noisy measurements provided by these sensors are assumed to be perfectly associated, i.e., they belong to the same target. The proposed algorithm leads to a maximum likelihood bias estimator. The evaluation of the corresponding Cramér-Rao Lower Bound (CRLB) on the covariance of the bias estimates, and the statistical tests on the results of simulations, show that both the target trajectory and the biases are observable and that this method is statistically efficient.
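As a rough sketch of a maximum likelihood bias estimator of this type, the code below jointly fits a target position and two pointing-angle biases to stacked pixel residuals. The small-angle camera model, the static-target simplification (the paper estimates a full target trajectory), and all names are illustrative assumptions rather than the paper's sensor model.

```python
import numpy as np
from scipy.optimize import least_squares

def predict_pixels(target_pos, sensor_pos, boresight_az, boresight_el,
                   bias_az, bias_el, focal_px=1000.0):
    """Simplified small-angle FPA model (a stand-in for the paper's model):
    the target's azimuth/elevation offset from the biased boresight, scaled
    by the focal length, gives the predicted pixel location."""
    d = target_pos - sensor_pos
    az = np.arctan2(d[1], d[0])
    el = np.arctan2(d[2], np.hypot(d[0], d[1]))
    u = focal_px * (az - (boresight_az + bias_az))
    v = focal_px * (el - (boresight_el + bias_el))
    return np.array([u, v])

def ml_estimate(pixel_meas, sensors, theta0):
    """ML estimate of theta = [target x, y, z, bias_az, bias_el] obtained by
    minimizing the stacked pixel residuals, which is equivalent to ML under
    independent Gaussian pixel noise."""
    def residuals(theta):
        pos, b_az, b_el = theta[:3], theta[3], theta[4]
        return np.concatenate([
            predict_pixels(pos, s_pos, az0, el0, b_az, b_el) - z
            for z, (s_pos, az0, el0) in zip(pixel_meas, sensors)
        ])
    return least_squares(residuals, theta0).x
```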
In order to carry out data fusion, it is crucial to account for the imprecision of sensor measurements due to systematic errors. This requires estimation of the sensor measurement biases. In this paper, we consider a 3D multisensor multitarget bias estimation approach for both additive and multiplicative biases in the measurements. Multiplicative biases can more accurately represent the real biases in many sensors; however, they increase the complexity of the estimation problem. By converting biased measurements into pseudo-measurements of the biases, it is possible to estimate the biases separately from the target state estimation. The conversion of the spherical measurements to Cartesian measurements, which has to be done using the unbiased conversion, is the key that allows estimation of the sensor biases without having to estimate the states of the targets of opportunity. The measurements provided by these sensors are assumed time-coincident (synchronous) and perfectly associated. We evaluate the Cramér-Rao Lower Bound (CRLB) on the covariance of the bias estimates, which serves as a quantification of the available information about the biases.
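The unbiased conversion referred to above has a well-known closed form; the sketch below shows one common version, in which each angle contributes a compensation factor exp(sigma^2/2), assuming independent Gaussian noise on azimuth and elevation. The paper's exact expressions may differ.

```python
import numpy as np

def unbiased_spherical_to_cartesian(r, az, el, sigma_az, sigma_el):
    """Unbiased (debiased) conversion of a spherical measurement
    (range r, azimuth az, elevation el) to Cartesian, compensating the bias
    introduced by passing Gaussian angle noise through the nonlinear
    transformation. Factors exp(sigma^2/2) follow the standard unbiased
    converted-measurement result; verify against the paper's expressions."""
    ka = np.exp(sigma_az**2 / 2.0)   # inverse of E[cos(angle noise)]
    ke = np.exp(sigma_el**2 / 2.0)
    x = ka * ke * r * np.cos(az) * np.cos(el)
    y = ka * ke * r * np.sin(az) * np.cos(el)
    z = ke * r * np.sin(el)
    return np.array([x, y, z])
```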
Bias estimation for multiple passive sensors using common targets of opportunity has been researched extensively. However, the proposed solutions required the use of multiple (two or more) passive sensors. In order to remove this constraint, we provide in this paper a new methodology using a single exoatmospheric target of opportunity seen in a single satellite-borne sensor's field of view to estimate the sensor's biases simultaneously with the state of the target. The satellite is equipped with an optical sensor that provides the Line Of Sight (LOS) measurements of azimuth and elevation to the target. The evaluation of the Cramér-Rao Lower Bound (CRLB) on the covariance of the bias estimates, and the statistical tests on the results of simulations, show that this method is statistically efficient.
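For concreteness, a LOS measurement model consistent with this description can be written as below; the notation (target position, satellite position marked with superscript s, biases b, zero-mean noises w) is an illustrative choice, not taken from the paper.

```latex
\begin{align}
  \alpha_k    &= \arctan\!\left(\frac{y_k - y_k^{s}}{x_k - x_k^{s}}\right)
                 + b_{\alpha} + w_{\alpha,k}, \\
  \epsilon_k  &= \arctan\!\left(\frac{z_k - z_k^{s}}
                 {\sqrt{(x_k - x_k^{s})^{2} + (y_k - y_k^{s})^{2}}}\right)
                 + b_{\epsilon} + w_{\epsilon,k},
\end{align}
```

where \(\alpha_k\) and \(\epsilon_k\) are the measured azimuth and elevation at time \(k\), and the biases \(b_{\alpha}, b_{\epsilon}\) are estimated jointly with the target state.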
KEYWORDS: Sensors, Satellites, Target detection, Space sensors, Missiles, Monte Carlo methods, Optical sensors, 3D acquisition, Optimization (mathematics), Statistical analysis
In this paper, an approach to bias estimation in the presence of measurement association uncertainty using common targets of opportunity is developed. Data association is carried out before the estimation of sensor angle measurement biases; consequently, the quality of data association is critical to the overall tracking performance. Data association becomes especially challenging if the sensors are passive. Mathematically, the problem can be formulated as a multidimensional optimization problem, where the objective is to maximize the generalized likelihood that the associated measurements correspond to common targets, based on target locations and sensor bias estimates. Applying gating techniques significantly reduces the size of this problem. The association likelihoods are evaluated using an exhaustive search, after which an acceptance test is applied to each solution in order to obtain the optimal (correct) solution. We demonstrate the merits of this approach by applying it to a simulated tracking system, which consists of two satellites tracking a ballistic target. We assume the sensors are synchronized and their locations are known, and we estimate their orientation biases together with the unknown target locations.
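A minimal sketch of the gated exhaustive search over association hypotheses is given below. The cost matrix, gate value, and one-to-one assignment assumption are illustrative simplifications of the generalized-likelihood formulation described above.

```python
import numpy as np
from itertools import permutations

def best_association(cost, gate=9.21):
    """Exhaustive search over one-to-one associations of sensor-1 and
    sensor-2 measurements (sketch). cost[i, j] is a negative-log-likelihood
    type distance for pairing measurement i with measurement j; pairs whose
    cost exceeds the gate are ruled out up front, pruning the enumeration.
    The gate value here is an illustrative chi-square point, not the paper's."""
    n = cost.shape[0]
    best, best_perm = np.inf, None
    for perm in permutations(range(n)):
        # Gating: discard any hypothesis containing an infeasible pair.
        if any(cost[i, j] > gate for i, j in enumerate(perm)):
            continue
        total = sum(cost[i, j] for i, j in enumerate(perm))
        if total < best:
            best, best_perm = total, perm
    return best_perm, best
```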
Integration of space-based sensors into a Ballistic Missile Defense System (BMDS) allows for detection and tracking of threats over a larger area than ground-based sensors [1]. This paper examines the effect of sensor bias error on the tracking quality of a Space Tracking and Surveillance System (STSS) for the highly nonlinear problem of tracking a ballistic missile. The STSS constellation consists of two or more satellites (on known trajectories) for tracking ballistic targets. Each satellite is equipped with an IR sensor that provides azimuth and elevation to the target. The tracking problem is made more difficult by a constant or slowly varying bias error present in each sensor's line-of-sight measurements. It is important to correct for these bias errors so that the multiple sensor measurements and/or tracks can be referenced as accurately as possible to a common tracking coordinate system. The measurements provided by these sensors are assumed time-coincident (synchronous) and perfectly associated. The line-of-sight (LOS) measurements from the sensors can be fused into composite measurements of the Cartesian target position, i.e., linear in the target state. We evaluate the Cramér-Rao Lower Bound (CRLB) on the covariance of the bias estimates, which serves as a quantification of the available information about the biases. Statistical tests on the results of simulations show that this method is statistically efficient, even for small sample sizes (as few as two sensors and six points on the (unknown) trajectory of a single target of opportunity). We also show that the RMS position error is significantly improved with bias estimation compared with the target position estimation using the original biased measurements.
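The fusion of LOS measurements into a composite Cartesian measurement can be illustrated by a least-squares intersection of the satellites' LOS rays, as sketched below; a weighted/ML version accounting for the angular noise covariances would follow the same structure. The function and variable names are assumptions for the example.

```python
import numpy as np

def fuse_los_to_cartesian(sat_positions, az, el):
    """Least-squares intersection of two (or more) LOS rays into a single
    composite Cartesian position measurement (sketch)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for s, a, e in zip(sat_positions, az, el):
        u = np.array([np.cos(e) * np.cos(a),
                      np.cos(e) * np.sin(a),
                      np.sin(e)])                 # unit LOS direction
        P = np.eye(3) - np.outer(u, u)            # projector orthogonal to the ray
        A += P
        b += P @ s
    return np.linalg.solve(A, b)                  # point closest to all rays
```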
KEYWORDS: Sensors, Error analysis, Statistical analysis, Monte Carlo methods, Passive sensors, Optical sensors, Data fusion, Composites, 3D metrology, Detection and tracking algorithms
In order to carry out data fusion, registration error correction is crucial in multisensor systems. This requires estimation of the sensor measurement biases. It is important to correct for these bias errors so that the multiple sensor measurements and/or tracks can be referenced as accurately as possible to a common tracking coordinate system. This paper provides a solution for bias estimation for the minimum number of passive sensors (two), when only targets of opportunity are available. The sensor measurements are assumed time-coincident (synchronous) and perfectly associated. Since these sensors provide only line-of-sight (LOS) measurements, the formation of a single composite Cartesian measurement, obtained by fusing the LOS measurements from the different sensors, is needed to avoid nonlinear filtering. We evaluate the Cramér-Rao Lower Bound (CRLB) on the covariance of the bias estimates, i.e., the quantification of the available information about the biases. Statistical tests on the results of simulations show that this method is statistically efficient, even for small sample sizes (as few as two sensors and six points on the trajectory of a single target of opportunity). We also show that the RMS position error is significantly improved with bias estimation compared with the target position estimation using the original biased measurements.
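The statistical efficiency claims above are typically verified with a normalized estimation error squared (NEES) test against the CRLB over Monte Carlo runs. The sketch below shows such a check under standard assumptions; the exact test statistics used in the paper may differ.

```python
import numpy as np
from scipy.stats import chi2

def nees_efficiency_test(est_errors, crlb, alpha=0.05):
    """Checks statistical efficiency in the usual Monte Carlo sense (sketch):
    the average NEES, computed with the CRLB as the error covariance, should
    fall inside the chi-square probability interval if the estimator attains
    the bound. est_errors has shape (runs, dim); crlb has shape (dim, dim)."""
    runs, dim = est_errors.shape
    crlb_inv = np.linalg.inv(crlb)
    nees = np.einsum('ri,ij,rj->r', est_errors, crlb_inv, est_errors)
    avg = nees.mean()
    lo = chi2.ppf(alpha / 2, runs * dim) / runs
    hi = chi2.ppf(1 - alpha / 2, runs * dim) / runs
    return lo <= avg <= hi, avg, (lo, hi)
```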