Accurately counting the number of people is useful in many applications. Currently, camera-based systems assisted by computer vision and machine learning algorithms represent the state of the art. However, they have limited coverage areas and are prone to blind spots, obscuration by walls, and shadowing of individuals in crowds, and they rely on optimal positioning and lighting conditions. Moreover, their ability to image people raises ethical and privacy concerns. In this paper we propose a distributed multistatic passive WiFi radar (PWR), consisting of one reference and three surveillance receivers, that can accurately count up to six test subjects using Doppler frequency shifts and intensity data from measured micro-Doppler (µ-Doppler) spectrograms. To build the person-counting processing model, we employ a multi-input convolutional neural network (MI-CNN). The results demonstrate 96% counting accuracy for six subjects when data from all three surveillance channels are utilised.
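As an illustration of the multi-input idea, here is a minimal PyTorch sketch with one convolutional branch per surveillance channel, fused before a classification head. The layer sizes, spectrogram dimensions, and the assumption of seven count classes (0 through 6 people) are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MICNN(nn.Module):
    """Illustrative multi-input CNN: one conv branch per surveillance
    channel, fused by concatenation before a classification head."""
    def __init__(self, n_channels=3, n_classes=7):  # hypothetical: counts 0..6
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten())
            for _ in range(n_channels)])
        self.head = nn.Sequential(
            nn.Linear(n_channels * 32 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, n_classes))

    def forward(self, spectrograms):
        # spectrograms: list of (batch, 1, freq_bins, time_bins) tensors
        feats = [b(x) for b, x in zip(self.branches, spectrograms)]
        return self.head(torch.cat(feats, dim=1))

model = MICNN()
x = [torch.randn(8, 1, 64, 64) for _ in range(3)]  # dummy µ-Doppler spectrograms
logits = model(x)  # (8, 7) class scores over person counts
```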
Federated learning creates an AI model from multiple data sources without moving large amounts of data to a central environment. It can be very useful in a tactical coalition environment, where data is collected individually by each coalition partner but network connectivity is inadequate to move the data to a central environment. However, the data collected in this way is often dirty and imperfect: it can be imbalanced, and in some cases entire classes can be missing from some coalition partners. Under these conditions, traditional approaches to federated learning can result in models that are highly inaccurate. In this paper, we propose approaches that produce good machine learning models even in environments where the data may be highly skewed, and study their performance under different conditions.
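For context, a minimal sketch of the federated-averaging baseline that such approaches refine, assuming each partner contributes locally trained weights and an example count; the paper's specific skew-handling methods are not reproduced here.

```python
import numpy as np

def federated_average(partner_weights, partner_counts):
    """Generic FedAvg step: combine locally trained model weights,
    weighting each coalition partner by its number of training examples.
    With highly skewed or class-missing data this baseline degrades,
    which is the failure mode the paper's approaches address."""
    total = sum(partner_counts)
    avg = [np.zeros_like(w) for w in partner_weights[0]]
    for weights, count in zip(partner_weights, partner_counts):
        for i, w in enumerate(weights):
            avg[i] += (count / total) * w
    return avg

# Three partners with imbalanced data sizes and one shared layer each.
w = [[np.ones((2, 2)) * k] for k in (1.0, 2.0, 3.0)]
print(federated_average(w, [100, 10, 1]))  # dominated by the data-rich partner
```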
Deep Neural Networks (DNNs) have achieved near-human, and in some cases super-human, accuracy in tasks such as machine translation, image classification, and speech processing. However, despite their enormous success, these models are often used as black boxes with very little visibility into their inner workings. This opacity often hinders the adoption of these models for mission-critical and human-machine hybrid networks.
In this paper, we explore the role of influence functions in opening up these black-box models and providing interpretability of their output. Influence functions characterize the impact of training data on the model parameters. We use these functions to analytically understand how the parameters are adjusted during the model training phase to embed the information contained in the training dataset. In other words, influence functions allow us to capture the change in the model parameters due to the training data. We then use these parameters to provide interpretability of the model output for test data points.
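Assuming the standard formulation of influence functions (e.g., Koh and Liang's, which this line of work builds on), the influence of upweighting a training point z on the learned parameters, and on the loss at a test point, can be written as:

```latex
\mathcal{I}_{\mathrm{params}}(z) = -H_{\hat\theta}^{-1}\,\nabla_\theta L(z,\hat\theta),
\qquad
H_{\hat\theta} = \tfrac{1}{n}\textstyle\sum_{i=1}^{n} \nabla_\theta^2 L(z_i,\hat\theta),
\qquad
\mathcal{I}_{\mathrm{loss}}(z, z_{\mathrm{test}}) = -\nabla_\theta L(z_{\mathrm{test}},\hat\theta)^{\top} H_{\hat\theta}^{-1}\,\nabla_\theta L(z,\hat\theta)
```

Training points with large positive or negative \(\mathcal{I}_{\mathrm{loss}}\) are those most responsible for the model's prediction at the test point, which is what makes these quantities useful for interpretability.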
In this paper, we discuss the problem of distributed learning for coalition operations. We consider a scenario where different coalition forces run learning systems independently but want to merge the insights obtained from all of them, sharing knowledge through a single model that combines their individual models. We consider the challenges involved in such fusion of models, and propose an algorithm that can find the right fused model in an efficient manner.
To achieve modularity and reconfigurability for sensor information fusion services in modern battle-spaces, dynamic service composition and dynamic topology determination are needed. In the current state of the art, such information fusion services are composed manually and programmatically. In this paper, we consider an approach towards greater automation: assuming that the topology of a solution is provided, we automatically choose the types and kinds of algorithms that can be used at each step. This includes the use of contextual information and techniques such as multi-armed bandits for navigating the exploration-exploitation tradeoff, as sketched below.
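As a sketch of the bandit component, the classic UCB1 rule below selects among candidate algorithms for one step of the topology; the reward model and constants are hypothetical, not taken from the paper.

```python
import math
import random

def ucb1_select(counts, mean_rewards, t, c=2.0):
    """UCB1: pick the candidate whose upper confidence bound is highest,
    trading off exploitation (mean reward) against exploration (uncertainty)."""
    for i, n in enumerate(counts):
        if n == 0:
            return i  # try every algorithm at least once
    return max(range(len(counts)),
               key=lambda i: mean_rewards[i] +
                             math.sqrt(c * math.log(t) / counts[i]))

# Choose among 3 candidate fusion algorithms for one step of the topology.
counts, means = [0, 0, 0], [0.0, 0.0, 0.0]
for t in range(1, 200):
    i = ucb1_select(counts, means, t)
    reward = random.gauss((0.3, 0.5, 0.7)[i], 0.1)  # hypothetical quality score
    counts[i] += 1
    means[i] += (reward - means[i]) / counts[i]
print(counts)  # most pulls should go to the best algorithm (index 2)
```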
Through its ability to create situation awareness, multi-target tracking is an extremely important capability for almost any kind of surveillance and tracking system. Many approaches have been proposed to address its inherent challenges. However, the majority of these approaches make two assumptions: that the probability of detection and the clutter rate are constant. Neither is likely to be true in practice. For example, because the projected size of a target becomes smaller as it moves further from the sensor, the probability of detection declines with range. When target detection is carried out using templates, the clutter rate depends on how much the environment resembles the current target of interest.
In this paper, we begin to investigate the impact of these effects. Using a simulation environment inspired by the challenges of Wide Area Surveillance (WAS), we develop state-dependent formulations for the probability of detection and the clutter rate. The impacts of these models are compared in a simulated urban environment populated by multiple vehicles and subject to occlusions. The results show that by accurately modelling the effects of occlusion and detection degradation, significant improvements in performance can be obtained.
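The abstract does not give the exact formulations, but a minimal sketch of what state-dependent detection and clutter models can look like follows; the functional forms and parameter values are illustrative assumptions.

```python
import numpy as np

def detection_probability(target_pos, sensor_pos, pd0=0.95, r0=500.0):
    """Illustrative range-dependent detection model: Pd falls off as the
    target's projected size shrinks with distance (here, exponentially)."""
    r = np.linalg.norm(np.asarray(target_pos) - np.asarray(sensor_pos))
    return pd0 * np.exp(-r / r0)

def clutter_rate(template_similarity, lam0=5.0):
    """Illustrative state-dependent clutter: the expected number of false
    alarms grows as the scene more closely resembles the target template."""
    return lam0 * (1.0 + template_similarity)

print(detection_probability([1000.0, 0.0], [0.0, 0.0]))  # ~0.13 at 1 km
print(clutter_rate(0.8))  # 9 expected false alarms per scan
```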
Changes in military operations in recent years underscore changes in the requirements of military units. One of the largest underlying changes is the transformation from large-scale battles to quick-reaction mobile forces. There is also pressure to reduce the number of warfighters at risk in operations. One consequence of these two factors is an increased need for situation awareness (SA); another is the use of unmanned vehicles, which makes it more difficult for the dismounted warfighter to maintain SA. An augmented reality (AR) system is a type of synthetic vision system that mixes computer-generated graphics (or annotations) with the real world. Annotations provide information aimed at establishing SA and aiding decision making. The AR system must decide what annotations to show and how to show them to ensure that the display is intuitive and unambiguous. We analyze the problem domain of military operations in urban terrain. Our goal is to determine the utility a synthetic vision system like AR can provide to a dismounted warfighter. In particular, we study the types of information that a warfighter is likely to find useful when working with teams of other warfighters. The problem domain is challenging because teammates may be occluded by urban infrastructure and may include unmanned vehicles operating in the environment. We consider the tasks of dynamic planning and deconfliction, navigation, target identification, and identification of friend or foe. We discuss the issues involved in developing a synthetic vision system, the usability goals that will measure how successful a system will be, and the use cases driving our development of a prototype system.
Through their ability to safely collect video and imagery from remote and potentially dangerous locations, UAVs have already transformed the battlespace. The effectiveness of this information can be greatly enhanced through synthetic vision. Given knowledge of the extrinsic and intrinsic parameters of the camera, synthetic vision superimposes spatially-registered computer graphics over the video feed from the UAV. This technique can be used to show many types of data such as landmarks, air corridors, and the locations of friendly and enemy forces. However, the effectiveness of a synthetic vision system strongly depends on the accuracy of the registration: if the graphics are poorly aligned with the real world they can be confusing, annoying, and even misleading.
In this paper, we describe an adaptive approach to synthetic vision that modifies the way in which information is displayed depending upon the registration error. We describe an integrated software architecture that has two main components. The first component automatically calculates registration error based on information about the uncertainty in the camera parameters. The second component uses this information to modify, aggregate, and label annotations to make their interpretation as clear as possible. We demonstrate the use of this approach on some sample datasets.
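As a sketch of the first component, one simple way to estimate registration error is first-order propagation of camera-orientation uncertainty into screen space. The pinhole model, finite-difference Jacobian, and parameter values below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def project(point_cam, f=800.0):
    """Simple pinhole projection of a camera-frame 3-D point to pixels."""
    x, y, z = point_cam
    return np.array([f * x / z, f * y / z])

def registration_error_std(point_cam, angle_cov, f=800.0, eps=1e-5):
    """First-order propagation of camera orientation uncertainty (yaw/pitch
    covariance `angle_cov`, in radians^2) to screen-space error. The
    Jacobian is built by central finite differences."""
    def rotated_projection(angles):
        yaw, pitch = angles
        cy, sy, cp, sp = np.cos(yaw), np.sin(yaw), np.cos(pitch), np.sin(pitch)
        R = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]) @ \
            np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
        return project(R @ point_cam, f)
    J = np.zeros((2, 2))
    for j in range(2):
        d = np.zeros(2); d[j] = eps
        J[:, j] = (rotated_projection(d) - rotated_projection(-d)) / (2 * eps)
    pixel_cov = J @ angle_cov @ J.T
    return np.sqrt(np.diag(pixel_cov))  # 1-sigma pixel error in (u, v)

cov = np.diag([np.radians(0.5)**2, np.radians(0.5)**2])  # 0.5 deg 1-sigma
print(registration_error_std(np.array([2.0, 0.0, 20.0]), cov))
```

A display manager can then compare this pixel-level error against thresholds to decide whether to draw a precise outline, an aggregated region, or a labelled arrow.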
A key enabler for Network Centric Warfare (NCW) is a sensor network that can collect and fuse vast amounts of disparate and complementary information from sensors that are geographically dispersed throughout the battlespace. This information will lead to better situation awareness so that commanders will be able to act faster and more effectively. However, these benefits are possible only if the sensor data can be fused and synthesized for distribution to the right user in the right form at the right time within the constraints of available bandwidth. In this paper we consider the problem of developing Level 1 data fusion algorithms for disparate fusion in NCW. These algorithms must be capable of operating in a fully distributed (or decentralized) manner; must be able to scale to extremely large numbers of entities; and must be able to combine many disparate types of data. To meet these needs we propose a framework that consists of three main components: an attribute-based state representation that treats an entity state as a collection of attributes, new methods or interpretations of uncertainty, and robust algorithms for distributed data fusion. We illustrate the discussion in the context of maritime domain awareness, mobile ad hoc networks, and multispectral image fusion.
This paper discusses our usability engineering process for the Battlefield Augmented Reality System (BARS). Usability engineering is a structured, iterative, stepwise development process. Like the related disciplines of software and systems engineering, usability engineering is a combination of management principles and techniques, formal and semi-formal evaluation techniques, and computerized tools. BARS is an outdoor augmented reality system that displays heads-up battlefield intelligence information to a dismounted warrior. The paper discusses our general usability engineering process. We originally developed the process in the context of virtual reality applications, but in this work we are adapting the procedures to an augmented reality system. The focus of this paper is our work on domain analysis, the first activity of the usability engineering process. We describe our plans for and our progress to date on our domain analysis for BARS. We give results in terms of a specific urban battlefield use case we have designed.
Many future missions for mobile robots demand multi-robot systems which are capable of operating in large environments for long periods of time. A critical capability is that each robot must be able to localize itself. However, GPS cannot be used in many environments (such as within city streets, under water, indoors, beneath foliage, or on extra-terrestrial robotic missions) where mobile robots are likely to become commonplace. A widely researched alternative is Simultaneous Localization and Map Building (SLAM): the vehicle constructs a map and, concurrently, estimates its own position. In this paper we consider the problem of building and maintaining an extremely large map (of one million beacons). We describe a fully distributed, highly scalable SLAM algorithm which builds on techniques from distributed data fusion systems. A central map is maintained in global coordinates using the Split Covariance Intersection (SCI) algorithm. Relative and local maps are run independently of the central map and their estimates are periodically fused with the central map.
KEYWORDS: Robots, Mobile robots, Sensors, Algorithm development, Process modeling, Robotics, Data fusion, Filtering (signal processing), Computing systems, Robotic systems
Many of the future missions for mobile robots demand multi-robot systems which are capable of operating in large environments for long periods of time. One of the most critical capabilities is the ability to localize: a mobile robot must be able to estimate its own position and to consistently transmit this information to other robots and control sites. Although state-of-the-art GPS is capable of yielding unmatched performance over large areas, it is not applicable in many environments (such as within city streets, under water, indoors, beneath foliage, or on extraterrestrial robotic missions) where mobile robots are likely to become commonplace. A widely researched alternative is Simultaneous Localization and Map Building (SLAM): the vehicle constructs a map and, concurrently, estimates its own position. However, most approaches are non-scalable (the storage and computational costs vary quadratically and cubically with the number of beacons in the map) and can only be used with multiple robotic vehicles with a great degree of difficulty. In this paper, we describe the development of a scalable, multiple-vehicle SLAM system. This system, based on the Covariance Intersection algorithm, is scalable: its storage and computational costs are linearly proportional to the number of beacons in the map. Furthermore, it scales to multiple robots: each has complete freedom to exchange partial or full map information with any other robot at any time step. We demonstrate the real-time performance of this system in a scenario with 15,000 beacons.
The Naval Research Laboratory (NRL) has spearheaded the development and application of Covariance Intersection (CI) for a variety of decentralized data fusion problems. Such problems include distributed control, onboard sensor fusion, and dynamic map building and localization. In this paper we describe NRL's development of a CI-based navigation system for the NASA Mars rover that stresses almost all aspects of decentralized data fusion. We also describe how this project relates to NRL's augmented reality, advanced visualization, and REBOT projects.
Battlefield situation awareness is the most fundamental prerequisite for effective command and control. Information about the state of the battlefield must be both timely and accurate. Imagery data is of particular importance because it can be directly used to monitor the deployment of enemy forces in a given area of interest, the traversability of the terrain in that area, as well as many other variables that are critical for tactical and force level planning. In this paper we describe prototype REmote Battlefield Observer Technology (REBOT) that can be deployed at specified locations and subsequently tasked to transmit high resolution panoramic imagery of its surrounding area. Although first generation REBOTs will be stationary platforms, the next generation will be autonomous ground vehicles capable of transporting themselves to specified locations. We argue that REBOT fills a critical gap in present situation awareness technologies. We expect to provide results of REBOT tests to be conducted at the 1999 Marines Advanced Warfighting Demonstration.
The dynamics of many physical systems are nonlinear and non-symmetric. The motion of a missile, for example, is strongly determined by aerodynamic drag, whose magnitude is a function of the square of speed. Nonlinearity can also arise from the coordinate system used, such as spherical coordinates for position. If a filter is applied to these types of systems, the distribution of its state estimate will be non-symmetric. The most widely used filtering algorithm, the Kalman filter, only utilizes the mean and covariance and does not maintain or exploit the symmetry properties of the distribution. Although the Kalman filter has been successfully applied in many highly nonlinear and non-symmetric systems, this has been achieved at the cost of neglecting a potentially rich source of information. In this paper we explore methods for maintaining and utilizing information over and above that provided by means and covariances. Specifically, we extend the Kalman filter paradigm to include the skew and examine the utility of maintaining this information. We develop a tractable, convenient algorithm which can be used to predict the first three moments of a distribution. This is achieved by extending the sigma point selection scheme of the unscented transformation to capture the mean, covariance and skew. The utility of maintaining the skew and using nonlinear update rules is assessed by examining the performance of the new filter against a conventional Kalman filter in a realistic tracking scenario.
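For reference, the three moments such a filter predicts are, in notation standard for state estimation (these definitions are textbook, not specific to the paper):

```latex
\hat{x} = E[x], \qquad
P = E\!\left[(x-\hat{x})(x-\hat{x})^{\top}\right], \qquad
S_{ijk} = E\!\left[(x-\hat{x})_i\,(x-\hat{x})_j\,(x-\hat{x})_k\right]
```

The extended sigma-point scheme chooses sample points and weights so that their weighted sample mean, covariance, and third central moment reproduce \(\hat{x}\), \(P\), and \(S\) before the points are propagated through the nonlinearity.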
KEYWORDS: Virtual reality, Visualization, 3D displays, Oceanography, Space operations, Telecommunications, Information technology, 3D modeling, Software development, Prototyping
Gaining a detailed and thorough understanding of the modern battle space is vital to the success of any military operation. Military commanders have access to significant quantities of information which originates from disparate and occasionally conflicting sources and systems. Combining this information into a single, coherent view of the environment can be extremely difficult, error prone and time consuming. In this paper we describe the Naval Research Laboratory's Virtual Reality Responsive Workbench (VRRWB) and Dragon software system which together address the problem of battle space visualization. The VRRWB is a stereoscopic 3D interactive graphics system which allows multiple participants to interact in a shared virtual environment and physical space. A graphical representation of the battle space, including the terrain and military assets which lie on it, is displayed on a projection table. Using a six degree of freedom tracked joystick, the user navigates through the environment and interacts, via selection and querying, with the represented assets and the terrain. The system has been successfully deployed in the Hunter Warrior Advanced Warfighting Exercise and the Joint Countermine ACTD Demonstration One. In this paper we describe the system and its capabilities in detail, discuss its performance in these two operations, and describe the lessons which have been learned.
KEYWORDS: Sensors, Motion models, Process modeling, Performance modeling, Navigation systems, Roads, Systems modeling, Filtering (signal processing), 3D modeling, Data modeling
There has been a significant increase in the use of Autonomous Guided Vehicles in ports, mines and other primary industries. Many of these applications require vehicles which operate safely and efficiently in unstructured environments at speeds approaching those of human-controlled vehicles. Meeting these objectives is extremely difficult, and arguably one of the most important requirements is an accurate and robust localization system. In this paper we describe the development of a prototype, Kalman filter-based localization system for a conventional road vehicle operating in an outdoor environment at speeds in excess of 15 m/s. Using sparsely placed beacons, vehicle position can be resolved to the order of a meter. Three main themes are addressed. The first is a quantitative methodology for sensor suite design. Sensors are classified according to their frequency responses and the suite is chosen to ensure a uniform response across the spectrum of vehicle maneuvers. The second theme develops accurate, high-order nonlinear models of vehicle motion which incorporate kinematics, dynamics and slip due to tyre deformation. Each model is useful within a certain operating regime; outside of this regime the model can fail, and the third theme avoids this problem through the use of multiple-model algorithms which synergistically fuse the properties of several models. By addressing these themes we have developed a navigation system which has been shown to be accurate and robust to different types of road surface and occasional sensor data loss. The theories and principles developed in this paper are being used to develop a navigation system for commercial mining vehicles.
The Kalman Filter (KF) is one of the most widely used methods for tracking and estimation due to its simplicity, optimality, tractability and robustness. However, the application of the KF to nonlinear systems can be difficult. The most common approach is to use the Extended Kalman Filter (EKF) which simply linearizes all nonlinear models so that the traditional linear Kalman filter can be applied. Although the EKF (in its many forms) is a widely used filtering strategy, over thirty years of experience with it has led to a general consensus within the tracking and control community that it is difficult to implement, difficult to tune, and only reliable for systems which are almost linear on the time scale of the update intervals. In this paper a new linear estimator is developed and demonstrated. Using the principle that a set of discretely sampled points can be used to parameterize mean and covariance, the estimator yields performance equivalent to the KF for linear systems yet generalizes elegantly to nonlinear systems without the linearization steps required by the EKF. We show analytically that the expected performance of the new approach is superior to that of the EKF and, in fact, is directly comparable to that of the second order Gauss filter. The method is not restricted to assuming that the distributions of noise sources are Gaussian. We argue that the ease of implementation and more accurate estimation features of the new filter recommend its use over the EKF in virtually all applications.
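A minimal sketch of the underlying idea, the basic two-moment unscented transform (the paper's estimator embeds this in a full filter; the example dynamics and constants are illustrative):

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    """Deterministically sample 2n+1 sigma points that capture the mean
    and covariance, pass each through the nonlinearity f, and recover the
    transformed mean and covariance -- no linearization required."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)
    sigma = [mean] + [mean + L[:, i] for i in range(n)] \
                   + [mean - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    ys = np.array([f(s) for s in sigma])
    y_mean = w @ ys
    y_cov = sum(wi * np.outer(y - y_mean, y - y_mean)
                for wi, y in zip(w, ys))
    return y_mean, y_cov

# Propagate a state through drag-like nonlinear dynamics (illustrative).
mean, cov = np.array([0.0, 10.0]), np.eye(2) * 0.1
f = lambda x: np.array([x[0] + 0.1 * x[1],
                        x[1] - 0.005 * x[1] * abs(x[1])])
print(unscented_transform(mean, cov, f))
```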
The covariance intersection (CI) framework represents a generalization of the Kalman filter that permits filtering and estimation to be performed in the presence of unmodeled correlations. As described in previous papers, unmodeled correlations arise in virtually all real-world problems; but in many applications the correlations are so significant that they cannot be 'swept under the rug' simply by injecting extra stabilizing noise within a traditional Kalman filter. In this paper we briefly describe some of the properties of the CI algorithm and demonstrate their relevance to the notoriously difficult problem of simultaneous map building and localization for autonomous vehicles.
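A minimal sketch of the CI fusion rule, assuming the determinant of the fused covariance is used as the optimization criterion (the trace is another common choice):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def covariance_intersection(xa, Pa, xb, Pb):
    """Covariance intersection: fuse two estimates whose cross-correlation
    is unknown. For any omega in [0, 1] the result is consistent; omega is
    chosen here to minimise the determinant of the fused covariance."""
    Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)
    def fused(omega):
        P = np.linalg.inv(omega * Pa_inv + (1 - omega) * Pb_inv)
        x = P @ (omega * Pa_inv @ xa + (1 - omega) * Pb_inv @ xb)
        return x, P
    res = minimize_scalar(lambda w: np.linalg.det(fused(w)[1]),
                          bounds=(0.0, 1.0), method='bounded')
    return fused(res.x)

# Two estimates of the same 2-D state with unknown correlation.
xa, Pa = np.array([1.0, 0.0]), np.array([[4.0, 0.0], [0.0, 1.0]])
xb, Pb = np.array([1.5, 0.2]), np.array([[1.0, 0.0], [0.0, 4.0]])
x, P = covariance_intersection(xa, Pa, xb, Pb)
print(x, np.diag(P))
```

Because the fused covariance is guaranteed consistent regardless of the true cross-correlation, this update can be applied repeatedly in a map-building loop without the overconfidence that plagues a naive Kalman update.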
A significant problem in tracking and estimation is the consistent transformation of uncertain state estimates between Cartesian and spherical coordinate systems. For example, a radar system generates measurements in its own local spherical coordinate system. In order to combine those measurements with those from other radars, however, a tracking system typically transforms all measurements to a common Cartesian coordinate system. The most common approach is to approximate the transformation through linearization. However, this approximation can lead to biases and inconsistencies, especially when the uncertainties on the measurements are large. A number of approaches have been proposed for using higher-order transformation models, but these approaches have found only limited use due to the often enormous implementation burdens incurred by the need to derive Jacobians and Hessians. This paper expands on a method for nonlinear propagation which is described in a companion paper. A discrete set of samples is used to capture the first four moments of the untransformed measurement. The transformation is then applied to each of the samples, and the mean and covariance are calculated from the result. It is shown that the performance of the algorithm is comparable to that of fourth-order filters, thus ensuring consistency even when the uncertainty is large. It is not necessary to calculate any derivatives, and the algorithm can be extended to incorporate higher-order information. The benefits of this algorithm are illustrated in the contexts of autonomous vehicle navigation and missile tracking.
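A minimal two-moment sketch of the sample-and-transform idea for a radar (range, bearing) measurement; the paper's method also captures the third and fourth moments, which this sketch omits, and the measurement noise values are illustrative.

```python
import numpy as np

def polar_to_cartesian_ut(r, theta, R, kappa=1.0):
    """Convert a (range, bearing) measurement with covariance R to
    Cartesian by transforming deterministic samples -- no Jacobians."""
    mean = np.array([r, theta])
    n = 2
    L = np.linalg.cholesky((n + kappa) * R)
    pts = [mean] + [mean + L[:, i] for i in range(n)] \
                 + [mean - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    xy = np.array([[p[0] * np.cos(p[1]), p[0] * np.sin(p[1])] for p in pts])
    m = w @ xy
    P = sum(wi * np.outer(p - m, p - m) for wi, p in zip(w, xy))
    return m, P

# 1 km range with large bearing uncertainty: the converted mean correctly
# pulls inside the measurement arc, which linearization misses.
R = np.diag([10.0**2, np.radians(15.0)**2])
print(polar_to_cartesian_ut(1000.0, 0.0, R))
```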
In navigation and tracking problems, the identification of an appropriate model of vehicular or target motion is vital to most practical data fusion algorithms. The true system dynamics are rarely known, and approximations are usually employed. Since systems can exhibit strikingly different behaviors, multiple models may be needed to describe each of these behaviors. Current methods either use model switching (a single process model is chosen from the set using a decision rule) or consider the models as a set of competing hypotheses, only one of which is 'correct'. However, these methods fail to exploit the fact that all models describe the same system and that all of them are, to some degree, 'correct'. In this paper we present a new paradigm for fusing information from a set of multiple process models. The predictions from each process model are regarded as observations which are corrupted by correlated noise. By employing the standard Kalman filter equations we combine data from multiple sensors and multiple process models optimally. There are a number of significant practical advantages to this technique. First, the performance of the system always equals or betters that of the best estimator in the set of models being used. Second, the same decision-theoretic machinery can be used to select the process models as well as the sensor suites.
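For two models, treating the predictions as estimates of the same state with known cross-covariance leads to a fused update of the Bar-Shalom/Campo form; a minimal sketch (the numerical values, including the cross-covariance, are illustrative):

```python
import numpy as np

def fuse_correlated(xa, Pa, xb, Pb, Pab):
    """Fuse two estimates of the same state with known cross-covariance
    Pab (Bar-Shalom/Campo form). Regarding each process model's prediction
    as an observation corrupted by correlated noise leads to updates of
    this kind; shown here for two models for brevity."""
    Pba = Pab.T
    K = (Pa - Pab) @ np.linalg.inv(Pa + Pb - Pab - Pba)
    x = xa + K @ (xb - xa)
    P = Pa - K @ (Pa - Pba)
    return x, P

# Two motion models predicting the same 1-D position, mildly correlated.
xa, Pa = np.array([10.0]), np.array([[2.0]])
xb, Pb = np.array([11.0]), np.array([[3.0]])
Pab = np.array([[0.5]])
print(fuse_correlated(xa, Pa, xb, Pb, Pab))  # fused estimate and covariance
```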