Presentation + Paper
Coalition situational understanding via explainable neuro-symbolic reasoning and learning
12 April 2021
Abstract
Recent years have seen significant advances in artificial intelligence (AI) and machine learning (ML) technologies applicable to coalition situational understanding (CSU). However, state-of-the-art ML techniques based on deep neural networks require large volumes of training data; unfortunately, representative training examples of situations of interest in CSU are usually sparse. Moreover, to be useful, ML-based analytic services must be capable of explaining their outputs. We describe an integrated CSU architecture that combines neural networks with symbolic learning and reasoning to address the problem of sparse training data. We also demonstrate how explainability can be achieved for deep neural networks operating on multimodal sensor feeds. The work focuses on real-time decision-making settings at the tactical edge, with both the symbolic and neural network parts of the system, including the explainability approaches, able to deal with temporal features.
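To make the neuro-symbolic idea in the abstract concrete, the sketch below shows one minimal pattern for such an architecture: a neural model scores atomic events per time step, and a symbolic temporal rule infers a complex event from those scores while retaining the supporting steps as an explanation. The event names ("siren", "crowd"), threshold, and rule are hypothetical illustrations, not taken from the paper.

```python
# Illustrative sketch only: the rule and event names are assumptions,
# not the authors' architecture. A neural perception model is presumed
# to supply per-time-step confidence scores for atomic events; the
# symbolic layer applies a temporal rule over those scores.

def detect_complex_event(scores, threshold=0.8, max_gap=3):
    """Fire when 'siren' is followed by 'crowd' within `max_gap` steps.

    scores: list of dicts mapping atomic-event name -> neural confidence.
    Returns (fired, explanation), where the explanation lists the
    supporting (event, time step) pairs -- the symbolic trace that
    justifies the inference.
    """
    siren_at = None
    for t, frame in enumerate(scores):
        if frame.get("siren", 0.0) >= threshold:
            siren_at = t
        elif siren_at is not None and frame.get("crowd", 0.0) >= threshold:
            if t - siren_at <= max_gap:
                return True, [("siren", siren_at), ("crowd", t)]
    return False, []

# Hypothetical neural outputs over three time steps.
scores = [
    {"siren": 0.90, "crowd": 0.10},
    {"siren": 0.20, "crowd": 0.30},
    {"siren": 0.10, "crowd": 0.85},
]
fired, why = detect_complex_event(scores)
# fired -> True; why -> [("siren", 0), ("crowd", 2)]
```

Keeping the supporting (event, time step) pairs alongside the decision is what makes the symbolic layer explainable: the output can be traced back to the neural detections that triggered the rule.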
Conference Presentation
© (2021) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Alun D. Preece, Dave Braines, Federico Cerutti, Jack Furby, Liam Hiley, Lance Kaplan, Mark Law, Alessandra Russo, Mani Srivastava, Marc Roig Vilamala, and Tianwei Xing "Coalition situational understanding via explainable neuro-symbolic reasoning and learning", Proc. SPIE 11746, Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications III, 117461X (12 April 2021); https://doi.org/10.1117/12.2587850
KEYWORDS: Neural networks, Artificial intelligence, Sensors, Machine learning