SAR ATR analysis and implications for learning
Conference Presentation + Paper, 7 June 2024
Abstract
Deep neural networks for automatic target recognition (ATR) have been shown to be highly successful on a large variety of Synthetic Aperture Radar (SAR) benchmark datasets. However, the black-box nature of neural network approaches raises concerns about how models come to their decisions, especially in high-stakes scenarios. Accordingly, a variety of techniques are being pursued that seek to offer insight into machine learning algorithms. In this paper, we first provide an overview of explainability and interpretability techniques, introducing their concepts and the insights they produce. Next, we summarize several methods for computing specific approaches to explainability and interpretability, as well as for analyzing their outputs. Finally, we demonstrate the application of several attribution map methods and apply both attribution analysis metrics and localization interpretability analysis to six neural network models trained on the Synthetic and Measured Paired Labeled Experiment (SAMPLE) dataset, illustrating the insights these methods offer for analyzing SAR ATR performance.
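As a rough illustration of the attribution-map family the abstract refers to, the sketch below implements occlusion-based attribution: each image patch is zeroed in turn, and the drop in the model's score is assigned to that patch as its importance. The `model_score` function here is a hypothetical stand-in (a fixed linear scorer), not one of the paper's trained SAR ATR networks, and the patch size and image are illustrative only.

```python
def model_score(image):
    # Hypothetical classifier confidence: a fixed weighted sum of pixels.
    # In practice this would be a trained SAR ATR network's class logit.
    h, w = len(image), len(image[0])
    return sum(image[i][j] * ((i + j) % 3) for i in range(h) for j in range(w))

def occlusion_map(image, patch=2):
    """Attribution per pixel: score drop when that pixel's patch is zeroed."""
    h, w = len(image), len(image[0])
    base = model_score(image)
    attr = [[0.0] * w for _ in range(h)]
    for i0 in range(0, h, patch):
        for j0 in range(0, w, patch):
            # Copy the image and zero out one patch.
            occluded = [row[:] for row in image]
            for i in range(i0, min(i0 + patch, h)):
                for j in range(j0, min(j0 + patch, w)):
                    occluded[i][j] = 0.0
            drop = base - model_score(occluded)
            # Assign the score drop to every pixel in the patch.
            for i in range(i0, min(i0 + patch, h)):
                for j in range(j0, min(j0 + patch, w)):
                    attr[i][j] = drop
    return attr

# Toy 4x4 "image" of ones; the resulting map highlights which
# patches the (hypothetical) scorer depends on most.
attr = occlusion_map([[1.0] * 4 for _ in range(4)], patch=2)
```

The same loop applies unchanged to any black-box scorer, which is why perturbation-based methods like this are a common baseline for the gradient-based attribution maps the paper analyzes.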
Conference Presentation
(2024) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Johannes Bauer, Efrain H. Gonzalez, William M. Severa, and Craig M. Vineyard "SAR ATR analysis and implications for learning", Proc. SPIE 13039, Automatic Target Recognition XXXIV, 130390L (7 June 2024); https://doi.org/10.1117/12.3014042
KEYWORDS
Neural networks
Synthetic aperture radar
Data modeling
Education and training
Shadows
Image segmentation
Automatic target recognition