Orchestrating explainable artificial intelligence for multimodal and longitudinal data in medical imaging.


Journal

NPJ digital medicine
ISSN: 2398-6352
Abbreviated title: NPJ Digit Med
Country: England
NLM ID: 101731738

Publication information

Publication date:
22 Jul 2024
History:
received: 16 Sep 2023
accepted: 15 Jul 2024
medline: 23 Jul 2024
pubmed: 23 Jul 2024
entrez: 22 Jul 2024
Status: epublish

Abstract

Explainable artificial intelligence (XAI) has gained substantial recognition over the last few years. While the technical developments are manifold, less focus has been placed on the clinical applicability and usability of such systems. Moreover, little attention has been given to XAI systems that can handle multimodal and longitudinal data, which we postulate are important features in many clinical workflows. In this study, we review, from a clinical perspective, the current state of XAI for multimodal and longitudinal datasets and highlight the associated challenges. Additionally, we propose the XAI orchestrator, a system that aims to help clinicians with the synopsis of multimodal and longitudinal data, the resulting AI predictions, and the corresponding explainability output. We propose several desirable properties of the XAI orchestrator, such as being adaptive, hierarchical, interactive, and uncertainty-aware.
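The orchestrator is proposed at a conceptual level in the paper. Purely as an illustration, a minimal sketch of such an instance might look like the following; every name, field, and the filtering/sorting logic here are our own assumptions, not from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Explanation:
    """One explainability output for a single modality/timepoint (illustrative)."""
    modality: str       # e.g. "MRI", "EHR"
    timepoint: str      # e.g. "baseline", "follow-up"
    summary: str        # human-readable explanation
    uncertainty: float  # 0.0 (certain) .. 1.0 (uncertain)

@dataclass
class XAIOrchestrator:
    """Collects per-modality, per-timepoint explanations and presents an
    uncertainty-aware synopsis to the clinician (hypothetical sketch)."""
    explanations: list = field(default_factory=list)

    def add(self, e: Explanation) -> None:
        self.explanations.append(e)

    def synopsis(self, max_uncertainty: float = 0.5) -> list:
        # Interactive threshold: suppress high-uncertainty explanations,
        # then rank the remainder with the most certain ones first.
        kept = [e for e in self.explanations if e.uncertainty <= max_uncertainty]
        return sorted(kept, key=lambda e: e.uncertainty)

orch = XAIOrchestrator()
orch.add(Explanation("MRI", "baseline", "lesion load drives risk", 0.2))
orch.add(Explanation("EHR", "follow-up", "cognitive score decline", 0.7))
top = orch.synopsis()
print([e.modality for e in top])  # prints ['MRI']
```

The adaptive and hierarchical aspects the authors describe would go beyond this sketch, e.g. grouping explanations by organ system or visit and adjusting the threshold to clinician feedback.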

Identifiers

pubmed: 39039248
doi: 10.1038/s41746-024-01190-w
pii: 10.1038/s41746-024-01190-w

Publication types

Journal Article; Review

Languages

eng

Pagination

195

Grants

Agency: Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung (Swiss National Science Foundation)
ID: 205320_212939

Copyright information

© 2024. The Author(s).

References

Albahri, A. S. et al. A systematic review of trustworthy and explainable artificial intelligence in healthcare: Assessment of quality, bias risk, and data fusion. Inf. Fusion 96, 156–191 (2023).
doi: 10.1016/j.inffus.2023.03.008
Tjoa, E. & Guan, C. A survey on explainable artificial intelligence (XAI): toward medical XAI. IEEE Trans. Neural Netw. Learn. Syst. 32, 4793–4813 (2021).
pubmed: 33079674 doi: 10.1109/TNNLS.2020.3027314
van Lent, M., Fisher, W. & Mancuso, M. An explainable artificial intelligence system for small-unit tactical behavior. IAAI Emerging Applications 900–907 (2004).
Graziani, M. et al. A global taxonomy of interpretable AI: unifying the terminology for the technical and social sciences. Artif. Intell. Rev. 56, 3473–3504 (2023).
pubmed: 36092822 doi: 10.1007/s10462-022-10256-8
Reyes, M. et al. On the interpretability of artificial intelligence in radiology: challenges and opportunities. Radiol. Artif. Intell. 2, e190043 (2020).
pubmed: 32510054 pmcid: 7259808 doi: 10.1148/ryai.2020190043
Lipkova, J. et al. Artificial intelligence for multimodal data integration in oncology. Cancer Cell 40, 1095–1110 (2022).
pubmed: 36220072 pmcid: 10655164 doi: 10.1016/j.ccell.2022.09.012
Boehm, K. M., Khosravi, P., Vanguri, R., Gao, J. & Shah, S. P. Harnessing multimodal data integration to advance precision oncology. Nat. Rev. Cancer 22, 114–126 (2022).
pubmed: 34663944 doi: 10.1038/s41568-021-00408-3
Acosta, J. N., Falcone, G. J., Rajpurkar, P. & Topol, E. J. Multimodal biomedical AI. Nat. Med. 28, 1773–1784 (2022).
pubmed: 36109635 doi: 10.1038/s41591-022-01981-2
Boonn, W. W. & Langlotz, C. P. Radiologist use of and perceived need for patient data access. J. Digit. Imaging 22, 357–362 (2009).
pubmed: 18459002 doi: 10.1007/s10278-008-9115-2
Huang, S.-C., Pareek, A., Seyyedi, S., Banerjee, I. & Lungren, M. P. Fusion of medical imaging and electronic health records using deep learning: a systematic review and implementation guidelines. Npj Digit. Med. 3, 1–9 (2020).
doi: 10.1038/s41746-020-00341-z
Troyanskaya, O. et al. Artificial intelligence and cancer. Nat. Cancer 1, 149–152 (2020).
pubmed: 35122011 doi: 10.1038/s43018-020-0034-6
Bi, W. L. et al. Artificial intelligence in cancer imaging: clinical challenges and applications. CA Cancer J. Clin. 69, 127–157 (2019).
pubmed: 30720861 pmcid: 6403009 doi: 10.3322/caac.21552
Heiliger, L., Sekuboyina, A., Menze, B., Egger, J. & Kleesiek, J. Beyond medical imaging: a review of multimodal deep learning in radiology. https://www.zora.uzh.ch/id/eprint/219067/ (2022).
Steyaert, S. et al. Multimodal data fusion for cancer biomarker discovery with deep learning. Nat. Mach. Intell. 5, 351–362 (2023).
pubmed: 37693852 pmcid: 10484010 doi: 10.1038/s42256-023-00633-5
Taleb, A., Kirchler, M., Monti, R. & Lippert, C. ContIG: self-supervised multimodal contrastive learning for medical imaging with genetics. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 20876–20889. https://doi.org/10.1109/CVPR52688.2022.02024 (2022).
Soenksen, L. R. et al. Integrated multimodal artificial intelligence framework for healthcare applications. Npj Digit. Med. 5, 1–10 (2022).
doi: 10.1038/s41746-022-00689-4
Joshi, G., Walambe, R. & Kotecha, K. A review on explainability in multimodal deep neural nets. IEEE Access 9, 59800–59821 (2021).
doi: 10.1109/ACCESS.2021.3070212
Venkadesh, K. V. et al. Prior CT improves deep learning for malignancy risk estimation of screening-detected pulmonary nodules. Radiology 308, e223308 (2023).
pubmed: 37526548 doi: 10.1148/radiol.223308
Rojat, T. et al. Explainable artificial intelligence (XAI) on TimeSeries data: a survey. Preprint at http://arxiv.org/abs/2104.00950 (2021).
Baltrušaitis, T., Ahuja, C. & Morency, L.-P. Multimodal machine learning: a survey and taxonomy. IEEE Trans. Pattern Anal. Mach. Intell. 41, 423–443 (2019).
pubmed: 29994351 doi: 10.1109/TPAMI.2018.2798607
Yala, A., Lehman, C., Schuster, T., Portnoi, T. & Barzilay, R. A deep learning mammography-based model for improved breast cancer risk prediction. Radiology 292, 60–66 (2019).
pubmed: 31063083 doi: 10.1148/radiol.2019182716
Joo, S. et al. Multimodal deep learning models for the prediction of pathologic response to neoadjuvant chemotherapy in breast cancer. Sci. Rep. 11, 18800 (2021).
pubmed: 34552163 pmcid: 8458289 doi: 10.1038/s41598-021-98408-8
Reda, I. et al. Deep learning role in early diagnosis of prostate cancer. Technol. Cancer Res. Treat. 17, 1533034618775530 (2018).
pubmed: 29804518 pmcid: 5972199 doi: 10.1177/1533034618775530
Hyun, S. H., Ahn, M. S., Koh, Y. W. & Lee, S. J. A machine-learning approach using PET-based radiomics to predict the histological subtypes of lung cancer. Clin. Nucl. Med. 44, 956 (2019).
pubmed: 31689276 doi: 10.1097/RLU.0000000000002810
Liu, J. et al. Prediction of rupture risk in anterior communicating artery aneurysms with a feed-forward artificial neural network. Eur. Radiol. 28, 3268–3275 (2018).
pubmed: 29476219 doi: 10.1007/s00330-017-5300-3
Yoo, Y. et al. Deep learning of brain lesion patterns and user-defined clinical and MRI features for predicting conversion to multiple sclerosis from clinically isolated syndrome. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 7, 250–259 (2019).
doi: 10.1080/21681163.2017.1356750
Mueller, S. G. et al. The Alzheimer’s disease neuroimaging initiative. Neuroimaging Clin. N. Am. 15, 869–877 (2005).
pubmed: 16443497 pmcid: 2376747 doi: 10.1016/j.nic.2005.09.008
Thung, K.-H., Yap, P.-T. & Shen, D. Multi-stage diagnosis of alzheimer’s disease with incomplete multimodal data via multi-task deep learning. Deep Learn. Med. Image Anal. Multimodal Learn. Clin. Decis. Support 10553, 160–168 (2017).
doi: 10.1007/978-3-319-67558-9_19
Bhagwat, N., Viviano, J. D., Voineskos, A. N. & Chakravarty, M. M. Modeling and prediction of clinical symptom trajectories in Alzheimer’s disease using longitudinal data. PLOS Comput. Biol. 14, e1006376 (2018).
pubmed: 30216352 pmcid: 6157905 doi: 10.1371/journal.pcbi.1006376
Li, H. & Fan, Y. Early prediction of Alzheimer’s disease dementia based on baseline hippocampal MRI and 1-year follow-up cognitive measures using deep recurrent neural networks. In: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019) 368–371. https://doi.org/10.1109/ISBI.2019.8759397 (2019).
Spasov, S. E., Passamonti, L., Duggento, A., Liò, P. & Toschi, N. A multi-modal convolutional neural network framework for the prediction of Alzheimer’s disease. In: 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) 1271–1274. https://doi.org/10.1109/EMBC.2018.8512468 (2018).
Qiu, S. et al. Fusion of deep learning models of MRI scans, Mini–Mental State Examination, and logical memory test enhances diagnosis of mild cognitive impairment. Alzheimers Dement. Diagn. Assess. Dis. Monit. 10, 737–749 (2018).
Sheng, J. et al. Predictive classification of Alzheimer’s disease using brain imaging and genetic data. Sci. Rep. 12, 2405 (2022).
pubmed: 35165327 pmcid: 8844076 doi: 10.1038/s41598-022-06444-9
Cao, R. et al. Development and interpretation of a pathomics-based model for the prediction of microsatellite instability in Colorectal Cancer. Theranostics 10, 11080–11091 (2020).
pubmed: 33042271 pmcid: 7532670 doi: 10.7150/thno.49864
Jurenaite, N., León-Periñán, D., Donath, V., Torge, S. & Jäkel, R. SetQuence & SetOmic: deep set transformer-based representations of cancer multi-omics. In: 2022 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB) 1–9. https://doi.org/10.1109/CIBCB55180.2022.9863058 (2022).
Prelaj, A. et al. Real-world data to build explainable trustworthy artificial intelligence models for prediction of immunotherapy efficacy in NSCLC patients. Front. Oncol. 12 (2023).
Arya, V. et al. One explanation does not fit all: a toolkit and taxonomy of ai explainability techniques. Preprint at https://doi.org/10.48550/arXiv.1909.03012 (2019).
Klaise, J., Van Looveren, A., Vacanti, G. & Coca, A. Alibi explain: algorithms for explaining machine learning models. JMLR 22, 1–7 (2021).
Kokhlikyan, N. et al. Captum: a unified and generic model interpretability library for PyTorch. Preprint at https://doi.org/10.48550/arXiv.2009.07896 (2020).
The Institute for Ethical Machine Learning. XAI - An eXplainability toolbox for machine learning. https://github.com/EthicalML/xai (2023)
Alber, M. et al. iNNvestigate neural networks! JMLR 20, 1–8 (2019).
Hedström, A. et al. Quantus: an explainable AI toolkit for responsible evaluation of neural network explanations and beyond. JMLR 24, 1–11 (2023).
Di Martino, F. & Delmastro, F. Explainable AI for clinical and remote health applications: a survey on tabular and time series data. Artif. Intell. Rev. 56, 5261–5315 (2023).
pubmed: 36320613 doi: 10.1007/s10462-022-10304-3
Reel, P. S., Reel, S., Pearson, E., Trucco, E. & Jefferson, E. Using machine learning approaches for multi-omics data analysis: a review. Biotechnol. Adv. 49, 107739 (2021).
pubmed: 33794304 doi: 10.1016/j.biotechadv.2021.107739
Berisha, V. et al. Digital medicine and the curse of dimensionality. Npj Digit. Med. 4, 1–8 (2021).
doi: 10.1038/s41746-021-00521-5
Ben Ahmed, K., Hall, L. O., Goldgof, D. B. & Fogarty, R. Achieving multisite generalization for CNN-based disease diagnosis models by mitigating shortcut learning. IEEE Access 10, 78726–78738 (2022).
doi: 10.1109/ACCESS.2022.3193700
Gichoya, J. W. et al. AI recognition of patient race in medical imaging: a modelling study. Lancet Digit. Health 4, e406–e414 (2022).
Geirhos, R. et al. Shortcut learning in deep neural networks. Nat. Mach. Intell. 2, 665–673 (2020).
doi: 10.1038/s42256-020-00257-z
Yu, Y., Lee, H. J., Kim, B. C., Kim, J. U. & Ro, Y. M. Investigating vulnerability to adversarial examples on multimodal data fusion in deep learning. Preprint at https://doi.org/10.48550/arXiv.2005.10987 (2020).
Simon-Gabriel, C.-J., Ollivier, Y., Bottou, L., Schölkopf, B. & Lopez-Paz, D. First-order adversarial vulnerability of neural networks and input dimension. Proceedings of the 36th International Conference on Machine Learning. PMLR. 97, 5809–5817 (2019).
Chen, J., Jia, C., Zheng, H., Chen, R. & Fu, C. Is multi-modal necessarily better? robustness evaluation of multi-modal fake news detection. IEEE Trans. Netw. Sci. Eng. 1–15 https://doi.org/10.1109/TNSE.2023.3249290 (2023).
Shaik, T., Tao, X., Li, L., Xie, H. & Velásquez, J. D. Multimodality fusion for smart healthcare: a journey from data, information, knowledge to wisdom. Preprint at http://arxiv.org/abs/2306.11963 (2023).
Rahim, N. et al. Prediction of Alzheimer’s progression based on multimodal deep-Learning-based fusion and visual Explainability of time-series data. Inf. Fusion 92, 363–388 (2023).
doi: 10.1016/j.inffus.2022.11.028
Anguita-Ruiz, A., Segura-Delgado, A., Alcalá, R., Aguilera, C. M. & Alcalá-Fdez, J. eXplainable Artificial Intelligence (XAI) for the identification of biologically relevant gene expression patterns in longitudinal human studies, insights from obesity research. PLOS Comput. Biol. 16, e1007792 (2020).
pubmed: 32275707 pmcid: 7176286 doi: 10.1371/journal.pcbi.1007792
Shashikumar, S. P., Josef, C. S., Sharma, A. & Nemati, S. DeepAISE—an interpretable and recurrent neural survival model for early prediction of sepsis. Artif. Intell. Med. 113, 102036 (2021).
pubmed: 33685592 pmcid: 8029104 doi: 10.1016/j.artmed.2021.102036
Ibrahim, L., Mesinovic, M., Yang, K.-W. & Eid, M. A. Explainable prediction of acute myocardial infarction using machine learning and shapley values. IEEE Access 8, 210410–210417 (2020).
doi: 10.1109/ACCESS.2020.3040166
Vielhaben, J., Lapuschkin, S., Montavon, G. & Samek, W. Explainable AI for time series via virtual inspection layers. Pattern Recognit. 150, 110309 (2024).
Sandoval, Y. et al. High-sensitivity cardiac troponin and the 2021 AHA/ACC/ASE/CHEST/SAEM/SCCT/SCMR guidelines for the evaluation and diagnosis of acute chest pain. Circulation 146, 569–581 (2022).
pubmed: 35775423 doi: 10.1161/CIRCULATIONAHA.122.059678
Sallam, M. The utility of ChatGPT as an example of large language models in healthcare education, research and practice: systematic review on the future perspectives and potential limitations. https://doi.org/10.1101/2023.02.19.23286155 (2023).
Lee, J. et al. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36, 1234–1240 (2020).
pubmed: 31501885 doi: 10.1093/bioinformatics/btz682
Rasmy, L., Xiang, Y., Xie, Z., Tao, C. & Zhi, D. Med-BERT: pretrained contextualized embeddings on large-scale structured electronic health records for disease prediction. Npj Digit. Med. 4, 1–13 (2021).
doi: 10.1038/s41746-021-00455-y
Wang, S., Zhao, Z., Ouyang, X., Wang, Q. & Shen, D. ChatCAD: interactive computer-aided diagnosis on medical image using large language models. Preprint at https://doi.org/10.48550/arXiv.2302.07257 (2023).
Huang, S.-C., Shen, L., Lungren, M. P. & Yeung, S. GLoRIA: a multimodal global-local representation learning framework for label-efficient medical image recognition. In: 2021 IEEE/CVF International Conference on Computer Vision (ICCV) 3922–3931. https://doi.org/10.1109/ICCV48922.2021.00391 (2021).
Wang, Z., Wu, Z., Agarwal, D. & Sun, J. MedCLIP: contrastive learning from unpaired medical images and text. Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 3876–3887 (2022).
OpenAI Platform. https://platform.openai.com (2023).
Wu, C. et al. Can GPT-4V(ision) Serve medical applications? Case studies on GPT-4V for multimodal medical diagnosis. Preprint at http://arxiv.org/abs/2310.09909 (2023).
Bienefeld, N. et al. Solving the explainable AI conundrum by bridging clinicians’ needs and developers’ goals. Npj Digit. Med. 6, 1–7 (2023).
doi: 10.1038/s41746-023-00837-4
Berrevoets, J., Kacprzyk, K., Qian, Z. & van der Schaar, M. Causal deep learning. Preprint at https://doi.org/10.48550/arXiv.2303.02186 (2023).
Ribeiro, F. D. S., Xia, T., Monteiro, M., Pawlowski, N. & Glocker, B. High fidelity image counterfactuals with probabilistic causal models. Proceedings of the 40th International Conference on Machine Learning. PMLR202. (2023).
Castro, D. C., Walker, I. & Glocker, B. Causality matters in medical imaging. Nat. Commun. 11, 3673 (2020).
pubmed: 32699250 pmcid: 7376027 doi: 10.1038/s41467-020-17478-w
Yue, K., Jin, R., Wong, C.-W., Baron, D. & Dai, H. Gradient obfuscation gives a false sense of security in federated learning. Preprint at https://doi.org/10.48550/arXiv.2206.04055 (2022).
Mo, F. et al. Quantifying and localizing usable information leakage from neural network gradients. Preprint at https://doi.org/10.48550/arXiv.2105.13929 (2022).
Mujawar, S., Deshpande, A., Gherkar, A., Simon, S. E. & Prajapati, B. in Human-Machine Interface 1–23 (John Wiley & Sons, Ltd, 2023). https://doi.org/10.1002/9781394200344.ch1.
Mosqueira-Rey, E., Hernández-Pereira, E., Alonso-Ríos, D., Bobes-Bascarán, J. & Fernández-Leal, Á. Human-in-the-loop machine learning: a state of the art. Artif. Intell. Rev. 56, 3005–3054 (2023).
doi: 10.1007/s10462-022-10246-w
Parcalabescu, L. & Frank, A. On measuring faithfulness of natural language explanations. Preprint at https://doi.org/10.48550/arXiv.2311.07466 (2023).
Wu, C., Zhang, X., Zhang, Y., Wang, Y. & Xie, W. MedKLIP: medical knowledge enhanced language-image pre-training for X-ray Diagnosis. IEEE/CVF International Conference on Computer Vision (ICCV). 21315–21326 (2023).
Filice, R. W. & Ratwani, R. M. The case for user-centered artificial intelligence in radiology. Radiol. Artif. Intell. 2, e190095 (2020).
pubmed: 33937824 pmcid: 8082296 doi: 10.1148/ryai.2020190095
Ejaz, H. et al. Artificial intelligence and medical education: a global mixed-methods study of medical students’ perspectives. Digit. Health 8, 20552076221089099 (2022).
pubmed: 35521511 pmcid: 9067043
Agrawal, A. et al. A survey of ASER members on artificial intelligence in emergency radiology: trends, perceptions, and expectations. Emerg. Radiol. 30, 267–277 (2023).
Huisman, M. et al. An international survey on AI in radiology in 1,041 radiologists and radiology residents part 1: fear of replacement, knowledge, and attitude. Eur. Radiol. 31, 7058–7066 (2021).
pubmed: 33744991 pmcid: 8379099 doi: 10.1007/s00330-021-07781-5
Huisman, M. et al. An international survey on AI in radiology in 1041 radiologists and radiology residents part 2: expectations, hurdles to implementation, and education. Eur. Radiol. 31, 8797–8806 (2021).
pubmed: 33974148 pmcid: 8111651 doi: 10.1007/s00330-021-07782-4
van Hoek, J. et al. A survey on the future of radiology among radiologists, medical students and surgeons: Students and surgeons tend to be more skeptical about artificial intelligence and radiologists may fear that other disciplines take over. Eur. J. Radiol. 121, 108742 (2019).
pubmed: 31734640 doi: 10.1016/j.ejrad.2019.108742
Codari, M. et al. Impact of artificial intelligence on radiology: a EuroAIM survey among members of the European Society of Radiology. Insights Imaging 10, 105 (2019).
doi: 10.1186/s13244-019-0798-3
Keeney, S., Hasson, F. & McKenna, H. P. A critical review of the Delphi technique as a research methodology for nursing. Int. J. Nurs. Stud. 38, 195–200 (2001).
pubmed: 11223060 doi: 10.1016/S0020-7489(00)00044-4
Schotman, E. & Iren, D. Algorithmic decision making and model explainability preferences in the insurance industry: a Delphi study. In: 2022 IEEE 24th Conference on Business Informatics (CBI) 01 235–242 (IEEE, 2022).
Mittelstadt, B., Russell, C. & Wachter, S. Explaining explanations in AI. In: (ed) IEEE staff Proceedings of the Conference on Fairness, Accountability, and Transparency 279–288. https://doi.org/10.1145/3287560.3287574 (2019).
Ates, E., Aksar, B., Leung, V. J. & Coskun, A. K. Counterfactual explanations for multivariate time series. In: 2021 International Conference on Applied Artificial Intelligence (ICAPAI) 1–8. https://doi.org/10.1109/ICAPAI49758.2021.9462056 (2021).
Siddiqui, S. A., Mercier, D., Munir, M., Dengel, A. & Ahmed, S. TSViz: demystification of deep learning models for time-series analysis. IEEE Access 7, 67027–67040 (2019).
doi: 10.1109/ACCESS.2019.2912823
Küsters, F., Schichtel, P., Ahmed, S. & Dengel, A. Conceptual explanations of neural network prediction for time series. In: 2020 International Joint Conference on Neural Networks (IJCNN) 1–6. https://doi.org/10.1109/IJCNN48605.2020.9207341 (2020).
Guidotti, R., Monreale, A., Spinnato, F., Pedreschi, D. & Giannotti, F. Explaining any time series classifier. In: 2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI) 167–176. https://doi.org/10.1109/CogMI50398.2020.00029 (2020).
Binder, A. et al. Shortcomings of top-down randomization-based sanity checks for evaluations of deep neural network explanations. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 16143–16152 (2023).
Baniecki, H., Kretowicz, W., Piatyszek, P., Wisniewski, J. & Biecek, P. dalex: responsible machine learning with interactive explainability and fairness in Python. JMLR 22, 1–7 (2021).
H2O.ai. https://github.com/h2oai (2023).
Li, X. et al. InterpretDL: explaining deep models in PaddlePaddle. JMLR 23, 1–6 (2022).
People+AI Research (PAIR) Initiative. Saliency Library. PAIR code. https://github.com/PAIR-code/saliency (2023).
Ancelin, M., Anne, E., Cavy, B. & Desmier, F. shapash. https://github.com/MAIF/shapash (2023).
Meudec, R. tf-explain. https://doi.org/10.5281/zenodo.5711704 (2021).
Fernandez, F.-G. TorchCAM: class activation explorer. https://github.com/frgfm/torch-cam (2023).
Fong, R., Patrick, M. & Vedaldi, A. Understanding deep networks via extremal perturbations and smooth masks. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). 2950–2958 (2019).
Krakowczyk, D. et al. Zennit. https://github.com/chr5tphr/zennit (2023).

Authors

Aurélie Pahud de Mortanges (A)

ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland. aurelie.pahuddemortanges@unibe.ch.

Haozhe Luo (H)

ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland.

Shelley Zixin Shu (SZ)

ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland.

Amith Kamath (A)

ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland.

Yannick Suter (Y)

ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland.
Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland.

Mohamed Shelan (M)

Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland.

Alexander Pöllinger (A)

Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, Bern, Switzerland.

Mauricio Reyes (M)

ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland.
Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland.

MeSH classifications