Why did AI get this one wrong? - Tree-based explanations of machine learning model predictions.

Keywords: Black-box; Explainable; Explanation; Fidelity; Interpretable; Local explanation; Model agnostic; Post-hoc; Reliability; Surrogate model; XAI

Journal

Artificial intelligence in medicine
ISSN: 1873-2860
Abbreviated title: Artif Intell Med
Country: Netherlands
NLM ID: 8915031

Publication information

Publication date: January 2023
History:
received: 2022-01-26
revised: 2022-11-25
accepted: 2022-11-28
entrez: 2023-01-11
pubmed: 2023-01-12
medline: 2023-01-13
Status: ppublish

Abstract

Increasingly complex learning methods such as boosting, bagging and deep learning have made machine learning (ML) models more accurate, but harder to interpret and explain, culminating in black-box machine learning models. Model developers and users alike are often presented with a trade-off between performance and intelligibility, especially in high-stakes applications like medicine. In the present article we propose a novel methodological approach for generating explanations for the predictions of a generic machine learning model, given a specific instance for which the prediction has been made. The method, named AraucanaXAI, is based on surrogate, locally-fitted classification and regression trees that are used to provide post-hoc explanations of the prediction of a generic machine learning model. Advantages of the proposed XAI approach include superior fidelity to the original model, ability to deal with non-linear decision boundaries, and native support for both classification and regression problems. We provide a packaged, open-source implementation of the AraucanaXAI method and evaluate its behaviour in a number of different settings that are commonly encountered in medical applications of AI. These include potential disagreement between the model prediction and the physician's expert opinion and low reliability of the prediction due to data scarcity.
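The abstract's core idea — explaining one prediction of a black-box model with a surrogate tree fitted to a local neighbourhood of the instance — can be illustrated with a minimal sketch. This is not the authors' packaged AraucanaXAI implementation; the Gaussian perturbation scheme, neighbourhood size, and tree depth below are illustrative assumptions, and scikit-learn's generic CART is used as the surrogate.

```python
# Minimal sketch of a local surrogate-tree explanation, in the spirit of
# the approach described in the abstract (NOT the AraucanaXAI package):
# sample a neighbourhood around the instance, label it with the black-box
# model, and fit a shallow CART tree as a post-hoc, interpretable surrogate.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# A "black-box" model whose single prediction we want to explain.
black_box = RandomForestClassifier(random_state=0).fit(X, y)
instance = X[0]

# Local neighbourhood: Gaussian perturbations of the instance, labelled
# by the black box (perturbation scale and size are assumptions).
neighbourhood = instance + rng.normal(scale=0.3, size=(200, X.shape[1]))
labels = black_box.predict(neighbourhood)

# Shallow surrogate tree fitted locally; its rules serve as the explanation.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(neighbourhood, labels)

# Fidelity: fraction of neighbourhood points where the surrogate agrees
# with the black box — the quality metric the abstract emphasises.
fidelity = (surrogate.predict(neighbourhood) == labels).mean()
print(f"local fidelity: {fidelity:.2f}")
print(export_text(surrogate))
```

Because the tree is fitted only to the neighbourhood, its printed rules describe the black box's behaviour around this one instance, which is what makes the explanation local and model-agnostic.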

Identifiers

pubmed: 36628785
pii: S0933-3657(22)00223-8
doi: 10.1016/j.artmed.2022.102471

Publication types

Journal Article; Research Support, Non-U.S. Gov't

Languages

eng

Citation subsets

IM

Pagination

102471

Copyright information

Copyright © 2022 The Authors. Published by Elsevier B.V. All rights reserved.

Conflict of interest statement

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Authors

Enea Parimbelli (E)

Department of Electric, Computer and Biomedical Engineering, University of Pavia, Pavia, Italy; Telfer School of Management, University of Ottawa, Ottawa, Ontario, Canada. Electronic address: enea.parimbelli@gmail.com.

Tommaso Mario Buonocore (TM)

Department of Electric, Computer and Biomedical Engineering, University of Pavia, Pavia, Italy.

Giovanna Nicora (G)

Department of Electric, Computer and Biomedical Engineering, University of Pavia, Pavia, Italy; enGenome srl, Pavia, Italy.

Wojtek Michalowski (W)

Telfer School of Management, University of Ottawa, Ottawa, Ontario, Canada.

Szymon Wilk (S)

Division of Intelligent Decision Support Systems, Institute of Computing Science, Poznan University of Technology, Poznan, Poland.

Riccardo Bellazzi (R)

Department of Electric, Computer and Biomedical Engineering, University of Pavia, Pavia, Italy.

Similar articles

Exploring blood-brain barrier passage using atomic weighted vector and machine learning.

Yoan Martínez-López, Paulina Phoobane, Yanaima Jauriga et al.
MeSH: Blood-Brain Barrier; Machine Learning; Humans; Support Vector Machine; Software

Understanding the role of machine learning in predicting progression of osteoarthritis.

Simone Castagno, Benjamin Gompels, Estelle Strangmark et al.
MeSH: Humans; Disease Progression; Machine Learning; Osteoarthritis

MeSH classifications