Explaining COVID-19 and Thoracic Pathology Model Predictions by Identifying Informative Input Features

Khakzar A, Zhang Y, Mansour W, Cai Y, Li Y, Zhang Y, Kim ST, Navab N (2021)


Publication Type: Conference contribution

Publication year: 2021

Publisher: Springer Science and Business Media Deutschland GmbH

Book Volume: 12903 LNCS

Pages Range: 391-401

Conference Proceedings Title: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Event location: Virtual, Online

ISBN: 9783030871987

DOI: 10.1007/978-3-030-87199-4_37

Abstract

Neural networks have demonstrated remarkable performance in classification and regression tasks on chest X-rays. In order to establish trust in the clinical routine, the networks' prediction mechanism needs to be interpretable. One principal approach to interpretation is feature attribution. Feature attribution methods identify the importance of input features for the output prediction. Building on the Information Bottleneck Attribution (IBA) method, for each prediction we identify the chest X-ray regions that have high mutual information with the network's output. The original IBA identifies input regions that have sufficient predictive information. We propose Inverse IBA to identify all informative regions. Thus, all predictive cues for pathologies are highlighted on the X-rays, a desirable property for chest X-ray diagnosis. Moreover, we propose Regression IBA for explaining regression models. Using Regression IBA we observe that a model trained on cumulative severity score labels implicitly learns the severity of different X-ray regions. Finally, we propose Multi-layer IBA to generate higher resolution and more detailed attribution/saliency maps. We evaluate our methods using both human-centric (ground-truth-based) interpretability metrics and human-agnostic feature importance metrics on the NIH Chest X-ray8 and BrixIA datasets. The code (https://github.com/CAMP-eXplain-AI/CheXplain-IBA) is publicly available.
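To give a flavor of the core idea, the sketch below illustrates the information-bottleneck attribution principle in a simplified, per-element Gaussian form: an intermediate feature map R is replaced by a noisy mixture Z = λ·R + (1 − λ)·ε with ε drawn from the feature distribution, and the information each element passes through the bottleneck is measured as the KL divergence between the conditional distribution of Z and the noise prior. This is an illustrative approximation, not the authors' implementation (which operates on deep feature maps of a trained network; see the linked repository); the function name and the shared scalar mean/variance are simplifying assumptions.

```python
import numpy as np

def iba_information(feature_map, lam, eps=1e-8):
    """Illustrative per-element information flow (in bits) through a
    noisy bottleneck Z = lam * R + (1 - lam) * noise,
    with noise ~ N(mu_R, sigma_R^2) estimated from the feature map.

    Attribution per element r is
    KL( N(lam*r + (1-lam)*mu, (1-lam)^2 * sigma^2) || N(mu, sigma^2) ).
    Simplifying assumption: a single scalar mean/variance for the whole map.
    """
    feature_map = np.asarray(feature_map, dtype=float)
    mu = feature_map.mean()
    var = feature_map.var() + eps

    # Conditional distribution of Z given the observed feature value r
    mu_z = lam * feature_map + (1.0 - lam) * mu
    var_z = (1.0 - lam) ** 2 * var

    # Closed-form KL divergence between two univariate Gaussians
    kl_nats = 0.5 * ((var_z + (mu_z - mu) ** 2) / var
                     - 1.0
                     - np.log(var_z / var + eps))
    return kl_nats / np.log(2.0)  # convert nats to bits
```

With λ = 0 the bottleneck passes pure noise, so no information flows (attribution ≈ 0 everywhere); as λ grows, elements far from the feature mean carry measurably more information, which is what the saliency map visualizes.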

How to cite

APA:

Khakzar, A., Zhang, Y., Mansour, W., Cai, Y., Li, Y., Zhang, Y.,... Navab, N. (2021). Explaining COVID-19 and Thoracic Pathology Model Predictions by Identifying Informative Input Features. In Marleen de Bruijne, Philippe C. Cattin, Stéphane Cotin, Nicolas Padoy, Stefanie Speidel, Yefeng Zheng, Caroline Essert (Eds.), Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (pp. 391-401). Virtual, Online: Springer Science and Business Media Deutschland GmbH.

MLA:

Khakzar, Ashkan, et al. "Explaining COVID-19 and Thoracic Pathology Model Predictions by Identifying Informative Input Features." Proceedings of the 24th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2021, Virtual, Online. Ed. Marleen de Bruijne, Philippe C. Cattin, Stéphane Cotin, Nicolas Padoy, Stefanie Speidel, Yefeng Zheng, Caroline Essert. Springer Science and Business Media Deutschland GmbH, 2021. 391-401.
