Centralized fusion ensures minimal information loss and maximizes the effectiveness of state estimation; statistically, it is the optimal sensor fusion configuration. In this paper, we introduce a local-sensor-driven asynchronous low-level centralized fusion methodology that seamlessly integrates radar and camera data at the level of detections from each sensor. In this local-sensor-driven asynchronous system, detections from the two sensing modalities, which have different sampling rates, are transmitted to a centralized filter that is updated whenever it receives a measurement. We implemented the proposed algorithm and validated the results using real data from manufacturing and industrial work sites. The data were obtained by Plato System’s Argus perception system, which combines high-resolution imaging mm-wave radar with camera sensors to provide indoor and outdoor activity tracking. We further compare the fusion results with vision-only multi-object tracking (MOT), as well as with track-level (track-to-track) fusion.
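The update-on-arrival rule described above can be sketched with a toy centralized Kalman filter; the sensor rates, noise levels, and constant-velocity model below are illustrative assumptions, not Plato System’s configuration.

```python
import numpy as np

def f_matrices(dt, q=0.1):
    # Constant-velocity transition and process noise for a time step dt
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)
    G = np.array([[dt**2 / 2, 0], [0, dt**2 / 2], [dt, 0], [0, dt]], float)
    return F, q * G @ G.T

H = np.hstack([np.eye(2), np.zeros((2, 2))])  # both sensors report 2D position

def update(x, P, z, R):
    # Standard Kalman measurement update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P

# Synthetic detections (t, z, R): radar at 20 Hz, camera at 10 Hz (assumed rates)
rng = np.random.default_rng(0)
truth = lambda t: np.array([1.0 * t, 0.5 * t])
radar = [(t, truth(t) + rng.normal(0, 0.05, 2), 0.05**2 * np.eye(2))
         for t in np.arange(0.0, 2.0, 0.05)]
camera = [(t, truth(t) + rng.normal(0, 0.10, 2), 0.10**2 * np.eye(2))
          for t in np.arange(0.0, 2.0, 0.10)]
stream = sorted(radar + camera, key=lambda d: d[0])  # merged, time-ordered

x, P, t_prev = np.zeros(4), 10.0 * np.eye(4), 0.0
for t, z, R in stream:
    F, Q = f_matrices(t - t_prev)        # predict over the actual elapsed time
    x, P = F @ x, F @ P @ F.T + Q
    x, P = update(x, P, z, R)            # update whenever a detection arrives
    t_prev = t
print(x[:2])  # position estimate, close to truth at the last detection time
```

Because the prediction interval is recomputed per detection, the filter consumes the asynchronous, mixed-rate stream without any synchronization step.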
The 3D trajectory estimation and observability problems of a target have previously been solved using angle-only measurements. In those works, the measurements were obtained in the thrusting/ballistic phase from a single fixed passive sensor. The present work solves the motion parameter estimation of a ballistic target in the reentry phase from a moving passive sensor on a fast aircraft. This is done with a 7-dimensional motion parameter vector (velocity azimuth angle, velocity elevation angle, drag coefficient, target speed and 3D position). The maximum likelihood (ML) estimator is used for the motion parameter estimation at the end of the observation interval. The ML estimate then allows prediction of the target’s future position at an arbitrary time, as well as its impact point. The observability of the system is verified numerically via the invertibility of the Fisher information matrix. The Cramer–Rao lower bound for the estimated parameter vector is evaluated, and it shows that the estimates are statistically efficient. Simulation results show complete observability for the scenario considered, illustrating that a single fast-moving sensor platform can estimate the motion parameters of a target in the reentry phase.
Sensors are prone to biases, which can lead to inaccurate associations and hence poor results in target tracking. The sensors used on autonomous vehicles (AV) are placed together or very close to each other (practically collocated), which makes bias estimation challenging. This work considers the bias estimation for two collocated synchronized sensors with slowly varying additive biases. The biases’ observability condition is met when the two sensors’ biases are Ornstein–Uhlenbeck stochastic processes with different time constants. The proposed bias estimation is independent of state estimation, and the bias models are identified based on sample autocorrelations. With bias-compensated observations, the fused measurement can be obtained using the Maximum Likelihood fusion technique. In the experiments, two collocated lidars (models from different manufacturers) are tested in real time. It is shown that the uncertainties of the biases are significantly reduced by the estimation algorithm presented. The observation error is reduced by up to 77% with bias-compensated measurement fusion, and the bias uncertainty (root mean square error) is reduced by up to 45% after fusion compared to the single-lidar scenario.
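The model-identification step (fitting an Ornstein–Uhlenbeck time constant from sample autocorrelations) can be sketched as follows; the sampling interval, time constant, and noise level are assumed values, not those of the lidar experiments.

```python
import numpy as np

# A discretized Ornstein-Uhlenbeck bias: b[k+1] = a*b[k] + w[k], a = exp(-dt/tau).
# The lag-1 sample autocorrelation estimates a, hence tau = -dt / ln(a_hat).
rng = np.random.default_rng(1)
dt, tau, sigma = 0.1, 2.0, 0.3           # illustrative values
a = np.exp(-dt / tau)
q = sigma**2 * (1 - a**2)                # keeps the process stationary
b = np.zeros(20000)
for k in range(len(b) - 1):
    b[k + 1] = a * b[k] + rng.normal(0, np.sqrt(q))

def lag1_autocorr(x):
    x = x - x.mean()
    return (x[:-1] @ x[1:]) / (x @ x)

tau_hat = -dt / np.log(lag1_autocorr(b))
print(tau_hat)                           # close to tau = 2.0 for a long record
```

In the paper’s setting this identification is applied to the difference of the two sensors’ observations, which removes the common target state and leaves only the bias processes.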
Object detection provides information needed for target tracking and plays a core role in autonomous driving. In this work, we study the uncertainty in the estimation of the centroid (position) of a bounding box of the measurements from an object detected by the sensor of an autonomous vehicle (AV). The estimated centroid uncertainty is used in object tracking as the measurement noise variance (which is not available from the sensor manufacturer) for measurement association and target state estimation. When the (position) uncertainty that captures the noise inherent in the sensor observations is available for each detected point (this can be obtained using Bayesian deep learning), the bounding box centroid uncertainty is obtained using a Least Squares (LS) estimator. When the uncertainty for each detected point is not available, one can assume a uniform distribution of the clustered points in a single rectangular bounding box; a Maximum Likelihood estimator is then used for the bounding box centroid estimation. Experiments using real data are carried out to show the performance of the proposed methods for autonomous driving applications. A comparison with the sample-mean approach shows the superiority of the new algorithms.
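For the case where a per-point position uncertainty is available, the LS centroid reduces (under an independent, isotropic noise assumption, which is a simplification of the paper’s setting) to an inverse-variance weighted mean, and its covariance comes out as a byproduct; the synthetic points below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
true_c = np.array([5.0, 3.0])                       # assumed true centroid
sig = rng.uniform(0.05, 0.5, size=50)               # per-point std devs
pts = true_c + rng.normal(size=(50, 2)) * sig[:, None]

W = 1.0 / sig**2                                    # weights for R_i = sig_i^2 * I
c_hat = (W[:, None] * pts).sum(axis=0) / W.sum()    # inverse-variance weighted mean
P_c = np.eye(2) / W.sum()                           # centroid covariance estimate
print(c_hat, np.sqrt(np.diag(P_c)))
```

The resulting `P_c` is exactly the quantity the tracker needs as the measurement noise covariance for association and state estimation.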
This paper considers the problem of estimating the launch point (LP) of a thrusting object from a single fixed sensor’s 2-D angle-only measurements (azimuth and elevation). It is assumed that the target follows a mass ejection model and that the measurements become available only a few seconds after the launch time due to limited visibility. Previous works on this problem estimate the target’s state, which, for a passive sensor, requires a long batch of measurements and is both sensitive to noise and ill-conditioned. In this paper, a polynomial fitting with the least squares approach is presented to estimate the LP without motion state estimation. We provide a statistical analysis to choose the optimal polynomial order, including overfitting and underfitting evaluation. Next, we present Monte Carlo simulations to show the performance of the proposed approach and compare it to the much more complicated state-of-the-art technique that relies on state estimation. It is shown that the proposed method provides a much simpler and more effective way to implement LP estimation in a real-time system than the state estimation methods.
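The polynomial least-squares idea can be sketched on a synthetic angle profile (the quadratic azimuth model and noise level below are assumptions, not the paper’s mass-ejection dynamics): fit candidate orders to the delayed measurements and extrapolate back to the launch time t = 0.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(3.0, 20.0, 0.1)                       # delayed acquisition (s)
az_true = lambda s: 0.8 + 0.02 * s + 0.001 * s**2   # assumed azimuth profile (rad)
z = az_true(t) + rng.normal(0, 0.002, t.size)       # noisy angle measurements

for order in (1, 2, 3):                             # compare candidate orders
    coef = np.polynomial.polynomial.polyfit(t, z, order)
    az0 = np.polynomial.polynomial.polyval(0.0, coef)
    print(order, az0)                               # order 2 recovers az_true(0) = 0.8
```

Comparing orders on such data shows the trade-off the paper’s statistical analysis addresses: too low an order biases the extrapolated value (underfitting), while too high an order inflates its variance (overfitting).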
In autonomous driving systems, distributed sensor fusion is widely used: each sensor has its own tracking system, and only the local tracks (LT) are transmitted to the fusion center (FC). We consider the fusion of LTs taking into account all the FC track-to-LT association hypotheses via probabilities in the proposed Hybrid Probabilistic Information Matrix Fusion (HPIMF) algorithm. In HPIMF, the track association and fusion are carried out with probabilistic weightings rather than using a single track association only. Different from track-to-track fusion (T2TF), which is one of the most commonly used approaches for distributed tracking systems, the associations considered in HPIMF are between the predicted FC state and the LTs from the local sensors. In each association event, at most one of the tracks within the track list of a local sensor can be associated with the FC state. In real-world scenarios there can be large uncertainties and missed tracks due to sensor imperfections and sensor–target geometry. Consequently, the association might be unreliable, and fusion based on only a single association hypothesis could fail. It is shown in simulations of a realistic autonomous driving system that HPIMF can successfully track a target of interest and is superior to T2TF, which relies on hard association decisions.
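The probabilistic-weighting idea (as opposed to a single hard association) can be illustrated in one dimension; this is a generic moment-matched mixture sketch with assumed numbers, not the full HPIMF derivation.

```python
import numpy as np

def gauss(y, m, v):
    # Gaussian density, used as the association likelihood
    return np.exp(-0.5 * (y - m)**2 / v) / np.sqrt(2 * np.pi * v)

x, P = 0.0, 4.0                      # predicted FC state (assumed 1D example)
tracks = np.array([0.5, 6.0])        # local tracks; the far one is spurious
R = 1.0                              # local track (measurement) variance

w = gauss(tracks, x, P + R)
w = w / w.sum()                      # association probabilities

K = P / (P + R)
x_i = x + K * (tracks - x)           # per-hypothesis updated states
P_i = (1 - K) * P

x_bar = (w * x_i).sum()              # moment-matched mixture mean
P_bar = (w * (P_i + (x_i - x_bar)**2)).sum()
print(w, x_bar, P_bar)
```

A hard-association rule would commit fully to the nearest track; the soft weights keep the unlikely hypothesis alive with a small probability, which is what makes the fusion robust when associations are ambiguous.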
The problem of selecting a target of interest for interdiction in the presence of several spurious tracks meant to confuse the defense has been around for several decades. The spurious tracks come from objects released by the target of interest; they move forward at the same speed as the target of interest and separate due to a release velocity orthogonal to the forward motion. The main means of discriminating between the target of interest and the spurious tracks discussed in the literature uses features, which, however, are not always available. The present work considers this problem when the extraneous tracks “look” the same as the target of interest to the sensor tracking them, i.e., they have no distinguishing features. It is shown that the history of the track kinematics (the evolution of the tracks) can be used via “track segment association” to select the track of the target of interest from the several tracks in the field of view of the sensor. One of the challenges of this work is that, with limited resolution capability, the observations from the sensor are unresolved when the extraneous targets start separating. In this work, the data association and tracking are handled separately from track segment association, which reduces the complexity of the problem and is shown in simulations to yield timely and reliable results.
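A generic form of the track-segment-association test (not the paper’s full algorithm) propagates one segment’s terminal state to the other segment’s start time and applies a chi-square gate to the Mahalanobis distance of the state difference; the numbers below, and the assumption of independent segment errors, are illustrative.

```python
import numpy as np

F = np.array([[1.0, 2.0], [0.0, 1.0]])            # 1D CV motion over dt = 2 s
xa, Pa = np.array([0.0, 1.0]), np.diag([0.1, 0.01])   # end of earlier segment
xb, Pb = np.array([2.1, 0.95]), np.diag([0.1, 0.01])  # start of later segment

d = xb - F @ xa                                    # state difference at t_b
S = F @ Pa @ F.T + Pb                              # assumes independent errors
d2 = d @ np.linalg.solve(S, d)                     # Mahalanobis distance squared
accept = d2 < 5.99                                 # 95% chi-square gate, 2 dof
print(d2, accept)
```

Applied pairwise between pre-separation and post-separation segments, such a test is the building block for selecting which current track continues the original target’s kinematic history.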
The sensor bias estimation problem is crucial in autonomous driving systems for perception and target tracking. This work considers the bias estimation for two collocated synchronized sensors with slowly varying additive biases. The differences between the two sensors’ observations are used to eliminate the target state; consequently, the bias estimation is independent of the target state estimation. The biases’ observability condition is met when the two sensors’ biases are Ornstein–Uhlenbeck stochastic processes with different time constants. A Maximum Likelihood measurement fusion technique is introduced for the bias-compensated observations. Simulation results for several scenarios with various bias model parameters demonstrate the consistency of the estimator. It is shown that the uncertainties of the biases are significantly reduced by the estimation algorithm presented. The sensitivity of the proposed algorithm is also tested with mismatched filters.
KEYWORDS: Active sensors, Sensors, Error analysis, Monte Carlo methods, Process modeling, Passive sensors, Motion models, Information fusion, Tantalum, Associative arrays
Track-to-track fusion (T2TF) has been studied widely for both homogeneous and heterogeneous cases, i.e., with common and with disparate state models, respectively. However, as opposed to the homogeneous case, the cross-covariance for heterogeneous local tracks in different state spaces, which accounts for the relationship between the process noises of the heterogeneous models, does not appear to be available in the literature. The present work provides the derivation of the cross-covariance for heterogeneous local tracks of different dimensions, where the local states are related by a nonlinear transformation (with no inverse transformation). First, the relationship between the process noise covariances of the motion models in the different state spaces is obtained. The cross-covariance of the local estimation errors is then derived in recursive form by taking into account the relationship between the local state model process noises. In our simulations, linear minimum mean square error (LMMSE) fusion is carried out for a scenario with two tracks of a target from two local trackers, one from an active sensor and one from a passive sensor.
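For reference, the cross-covariance recursion has the well-known T2TF structure below; the coupling term is a hedged sketch of how the heterogeneous case relates the process noises (here $g$ is the map from state space 1 to state space 2 and $G_k$ its Jacobian; the paper’s exact expressions may differ):

```latex
P^{12}_{k|k} = \bigl(I - K^1_k H^1_k\bigr)
\Bigl[F^1_{k-1}\,P^{12}_{k-1|k-1}\,\bigl(F^2_{k-1}\bigr)^\top + Q^{12}_{k-1}\Bigr]
\bigl(I - K^2_k H^2_k\bigr)^\top,
\qquad
Q^{12}_{k-1} \approx Q^1_{k-1}\,G_{k-1}^\top ,
```

where the approximation for $Q^{12}$ follows from linearizing $x^2 = g(x^1)$, so that the process noises are related by $v^2_k \approx G_k v^1_k$.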
For a thrusting/ballistic target, previous works have shown that a single fixed sensor with 2-D angle-only measurements (azimuth and elevation angles) is able to estimate the target’s 3-D trajectory. In those works, the measurements were considered as starting either from the launch point or with a delayed acquisition; in the latter case, the target is in flight and thrusting. The present work solves the estimation problem of a target with delayed acquisition after the burn-out time (BoT), i.e., in the ballistic stage. This is done with a 7-D parameter vector (velocity vector azimuth angle and elevation angle, drag coefficient, 3-D acquisition position and target speed at the acquisition time) assuming noiseless motion. The Fisher Information Matrix (FIM) is evaluated to establish observability numerically. The Maximum Likelihood (ML) estimator is used for the motion parameter estimation at the acquisition time. The impact point prediction (IPP) is then carried out with the ML estimate. Simulation results for the scenarios considered illustrate that the ML estimator is efficient.
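The numerical observability check (FIM invertibility) can be sketched as follows; the scenario is a simplified assumption (km units, known gravity, no drag, and a 6-D parameter vector instead of the paper’s 7-D one).

```python
import numpy as np

g = np.array([0.0, 0.0, -9.8e-3])                 # gravity, km/s^2

def h(theta, t):
    # Angle-only measurement of a ballistic target; theta = [p0 (km), v (km/s)]
    p = theta[:3] + theta[3:] * t + 0.5 * g * t**2
    az = np.arctan2(p[1], p[0])
    el = np.arctan2(p[2], np.hypot(p[0], p[1]))
    return np.array([az, el])

theta = np.array([20.0, 10.0, 5.0, -0.3, 0.1, 0.0])   # assumed true parameters
R_inv = np.eye(2) / 1e-6                              # 1 mrad angle noise

def jac(t, eps=1e-5):
    # Central finite-difference Jacobian of h w.r.t. theta
    J = np.zeros((2, 6))
    for i in range(6):
        d = np.zeros(6); d[i] = eps
        J[:, i] = (h(theta + d, t) - h(theta - d, t)) / (2 * eps)
    return J

FIM = sum(jac(t).T @ R_inv @ jac(t) for t in np.arange(1.0, 31.0))
print(np.linalg.matrix_rank(FIM))     # full rank (6) => locally observable here
```

The known gravity term is what breaks the range-scaling ambiguity of pure angle measurements; without it the FIM would be singular along the scaling direction.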
A passive ranging problem with elevation angle, azimuth angle and signal intensity measurements is presented and solved with a Maximum Likelihood (ML) estimator. The measurements used in the estimation are all obtained from a single passive sensor at a fixed location. The intensity measurement, which obeys the inverse square law, i.e., it is inversely proportional to the squared distance between the sensor and the target, depends on an unknown emitted energy that must be taken into account in the estimation problem. The Fisher Information Matrix (FIM) is investigated and used for observability testing. The simulation results for the scenarios considered demonstrate the efficiency of the ML estimator.
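Under Gaussian noise, the ML criterion amounts to a weighted nonlinear least-squares cost over the azimuth, elevation, and inverse-square-law intensity residuals, with the unknown emitted energy as an extra parameter; the constant-velocity motion and the numbers below are assumptions for illustration.

```python
import numpy as np

def meas(p, E):
    # Azimuth, elevation, and intensity E / r^2 from the sensor at the origin
    r2 = p @ p
    return np.array([np.arctan2(p[1], p[0]),
                     np.arctan2(p[2], np.hypot(p[0], p[1])),
                     E / r2])

sig = np.array([1e-3, 1e-3, 0.05])               # assumed noise std devs

def nll(theta, zs, ts):
    # theta = [p0 (3), v (3), E]; constant-velocity motion assumed
    res = [(meas(theta[:3] + theta[3:6] * t, theta[6]) - z) / sig
           for t, z in zip(ts, zs)]
    return 0.5 * sum(r @ r for r in res)

ts = np.arange(0.0, 10.0)
truth = np.array([10.0, 5.0, 2.0, -0.2, 0.1, 0.0, 5e3])
zs = [meas(truth[:3] + truth[3:6] * t, truth[6]) for t in ts]  # noiseless data
print(nll(truth, zs, ts))   # zero at the truth; the ML estimate minimizes this
```

Any numerical optimizer applied to `nll` yields the ML estimate; the FIM at the minimum then serves as the observability test described in the abstract.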