nEGXAI: a negation-based explainable AI through feature learning in Fourier domain
28 September 2023 | Presentation + Paper
Abstract
Explainable artificial intelligence (AI) techniques help explain the predictive behavior of a machine learning model without using information about the model architecture, model parameters, or training strategy, relying instead on human-interpretable features and the model's predictions. This paper presents a technique, nEGXAI, for developing explainable AI models. Current explainable AI approaches are generally built on affirmation-based logic: when a model predicts a class label, the explanation cites the human-interpretable features associated with the predicted class. In contrast, the proposed approach uses negation-based mathematical logic: when a model predicts a class label, nEGXAI explains why the predicted class is not one of the other classes, using the features that are not associated with the predicted class. nEGXAI is also model-agnostic in the sense that it assumes no knowledge of model internals and uses only the features from the data and the predictions, as the Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) techniques do. nEGXAI exploits the mathematical concept of constructive negation and its effect on the spectral density function of the features in the Fourier domain. This allows nEGXAI to capture possible prediction failures that are human-interpretable and explainable; hence, it delivers a trustworthy explainable AI for decision making.
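The abstract does not specify the algorithm, but the general negation-via-spectral-density idea can be illustrated with a minimal, hypothetical sketch. Here the spectral density of a feature vector is estimated with an FFT periodogram, and the explanation "not class k" is supported by the mismatch between the sample's spectrum and class k's mean spectrum. The function names, the synthetic two-class data, and the Euclidean distance measure are all assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def spectral_density(x):
    # Periodogram estimate: magnitude-squared of the real-input DFT,
    # normalized by the signal length.
    return (np.abs(np.fft.rfft(x)) ** 2) / len(x)

def negation_scores(sample, class_mean_spectra):
    """For each class, measure the mismatch between the sample's spectrum
    and the class's mean spectrum. A high mismatch supports the negation
    'this sample is NOT from that class'."""
    s = spectral_density(sample)
    return {c: float(np.linalg.norm(s - m)) for c, m in class_mean_spectra.items()}

rng = np.random.default_rng(0)
n, d = 50, 64
t = np.arange(d)
# Hypothetical classes with distinct spectral signatures:
# class A is a low-frequency sinusoid, class B a high-frequency one.
class_a = np.sin(2 * np.pi * 2 * t / d) + 0.1 * rng.standard_normal((n, d))
class_b = np.sin(2 * np.pi * 12 * t / d) + 0.1 * rng.standard_normal((n, d))

class_mean_spectra = {
    "A": np.mean([spectral_density(x) for x in class_a], axis=0),
    "B": np.mean([spectral_density(x) for x in class_b], axis=0),
}

sample = class_a[0]  # a sample whose true (and predicted) class is A
scores = negation_scores(sample, class_mean_spectra)
# The predicted class should show the LOWEST mismatch; every other class's
# larger mismatch is the negation-style evidence: "not B, because the
# sample's spectral signature does not match class B's."
print(scores)
```

The design choice worth noting is that the explanation is phrased in terms of the classes ruled out rather than the class chosen, which mirrors the abstract's contrast between affirmation-based and negation-based logic.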
Conference Presentation
(2023) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Shan Suthaharan "nEGXAI: a negation-based explainable AI through feature learning in Fourier domain", Proc. SPIE 12655, Emerging Topics in Artificial Intelligence (ETAI) 2023, 126550E (28 September 2023); https://doi.org/10.1117/12.2682004
KEYWORDS: Education and training, Artificial intelligence, Data modeling, Spectral density, Logic, Simulations, Machine learning