Poster + Presentation + Paper
12 April 2021
Explainable AI for medical imaging: explaining pneumothorax diagnoses with Bayesian teaching
Conference Poster
Abstract
Limited expert time is a key bottleneck in medical imaging. Due to advances in image classification, AI can now serve as decision support for medical experts, with the potential for great gains in radiologist productivity and, by extension, public health. However, these gains are contingent on building and maintaining experts’ trust in the AI agents. Explainable AI may build such trust by helping medical experts to understand the AI decision processes behind diagnostic judgements. Here we introduce and evaluate explanations based on Bayesian Teaching, a formal account of explanation rooted in the cognitive science of human learning. We find that medical experts exposed to explanations generated by Bayesian Teaching successfully predict the AI’s diagnostic decisions and are more likely to certify the AI for cases when the AI is correct than when it is wrong, indicating appropriate trust. These results show that Explainable AI can be used to support human-AI collaboration in medical imaging.
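For readers unfamiliar with the framework, the sketch below illustrates the core idea of Bayesian Teaching in a toy setting of our own construction, not the paper's actual pneumothorax classifier: candidate teaching examples are scored by the posterior probability a simulated learner would assign to the target hypothesis after seeing them, so that the teacher's selection probability satisfies P_teacher(D | θ*) ∝ P_learner(θ* | D). The Gaussian class model, the one-dimensional "image feature", and the learner_posterior helper are all illustrative assumptions.

```python
import numpy as np

# Hypothetical toy illustration of Bayesian Teaching example selection.
# A teaching set D is scored by the posterior a simulated learner would
# assign to the target hypothesis theta* after observing D:
#     P_teacher(D | theta*) proportional to P_learner(theta* | D).

rng = np.random.default_rng(0)

# Assumed toy model: one "image feature" per case, two diagnostic classes,
# each modeled as a unit-variance Gaussian (illustrative, not the paper's CNN).
means = {"pneumothorax": 1.0, "healthy": -1.0}

# Candidate pool of examples drawn from the target class.
examples = rng.normal(means["pneumothorax"], 1.0, size=20)

def learner_posterior(d, target="pneumothorax"):
    """Posterior a naive Bayesian learner assigns to the target class
    after observing examples d, under equal priors and unit-variance
    Gaussian likelihoods (an illustrative assumption)."""
    log_liks = np.array([-0.5 * np.sum((d - m) ** 2) for m in means.values()])
    post = np.exp(log_liks - log_liks.max())
    post /= post.sum()
    return post[list(means).index(target)]

# Score each candidate (here, single-example teaching sets) by how strongly
# it teaches the target class, then normalize into selection probabilities.
scores = np.array([learner_posterior(np.array([x])) for x in examples])
selection_probs = scores / scores.sum()

best = examples[np.argmax(selection_probs)]
print(f"Most instructive example: {best:.2f} "
      f"(selection prob {selection_probs.max():.2f})")
```

In this sketch the most instructive example is the one that most sharply shifts the learner's posterior toward the target diagnosis; the same selection principle, applied to image regions and training cases, underlies the explanations evaluated in the paper.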
Conference Presentation
© (2021) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Tomas Folke, Scott Cheng-Hsin Yang, Sean Anderson, and Patrick Shafto "Explainable AI for medical imaging: explaining pneumothorax diagnoses with Bayesian teaching", Proc. SPIE 11746, Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications III, 117462J (12 April 2021); https://doi.org/10.1117/12.2585967
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS: Artificial intelligence, X-rays, Diagnostics, Convolutional neural networks, Decision support systems, X-ray imaging
