Recent years have seen significant advances in artificial intelligence (AI) and machine learning (ML) technologies applicable to coalition situational understanding (CSU). However, state-of-the-art ML techniques based on deep neural networks require large volumes of training data; unfortunately, representative training examples of situations of interest in CSU are usually sparse. Moreover, to be useful, ML-based analytic services must be capable of explaining their outputs. We describe an integrated CSU architecture that combines neural networks with symbolic learning and reasoning to address the problem of sparse training data. We also demonstrate how explainability can be achieved for deep neural networks operating on multimodal sensor feeds. The work focuses on real-time decision-making settings at the tactical edge, with both the symbolic and neural network parts of the system, including the explainability approaches, able to deal with temporal features.
Autonomous systems are expected to have a major impact in future coalition operations. These systems are enabled by a variety of artificial intelligence (AI) learning algorithms that contextualize and adapt in varying, possibly unforeseen situations to assist humans in achieving complex tasks. Moreover, these systems will be required to operate in dynamic and challenging environments that impose constraints such as task formation and collaboration, ad hoc resource availability, rapidly changing environmental conditions, and the requirement to abide by mission objectives. Such systems therefore need the capability to adapt and evolve so that they can behave autonomously at the edge of the network in new situations. Crucially, autonomous systems must understand the bounds within which they can operate, based on their own capabilities and the constraints of the environment. Systems typically use policies to define their behavior and constraints, and these policies are often manually configured and managed by humans. AI-enabled systems will require novel approaches to rapidly learn, create, augment, and model emerging policies to support real-time decision making. Recent work has shown that such policy models can be generated by techniques ranging from symbolic learning to shallow and deep learning approaches, for different classes of problems. Motivated by this observation, in this paper we propose to apply recent advances in explainable AI to develop an approach that is agnostic to the learning algorithm, thus enabling seamless policy generation in the coalition environment.