A QUEST for Model Assessment: Identifying Difficult Subgroups via Epistemic Uncertainty Quantification.


Journal

AMIA ... Annual Symposium proceedings. AMIA Symposium
ISSN: 1942-597X
Abbreviated title: AMIA Annu Symp Proc
Country: United States
NLM ID: 101209213

Publication information

Publication date:
2023
History:
medline: 2024-01-15
pubmed: 2024-01-15
entrez: 2024-01-15
Status: epublish

Abstract

Uncertainty quantification in machine learning can provide powerful insight into a model's capabilities and enhance human trust in opaque models. Well-calibrated uncertainty quantification reveals a connection between high uncertainty and an increased likelihood of incorrect classification. We hypothesize that if we can explain a model's uncertainty by generating rules that define subgroups of data with high and low classification uncertainty, then those same rules will identify subgroups on which the model performs well and subgroups on which it performs poorly. If true, the utility of uncertainty quantification is not limited to understanding the certainty of individual predictions; it can also provide a more global view of how well the model understands patient subpopulations. We evaluate our proposed technique and hypotheses with deep neural networks and tree-based gradient boosting ensembles across benchmark and real-world medical datasets.
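The hypothesis in the abstract can be illustrated with a minimal sketch: score each instance by an epistemic-uncertainty proxy (here, disagreement among a toy ensemble), define a subgroup rule from that score, and compare model accuracy inside and outside the subgroup. This is not the paper's actual method; the data, the stump ensemble, and the 0.5 uncertainty cutoff are all illustrative assumptions.

```python
# Illustrative sketch (not the paper's method): ensemble disagreement as an
# epistemic-uncertainty proxy, a subgroup "rule" derived from it, and a
# per-subgroup accuracy comparison. All data and thresholds are made up.

# Toy 1-D binary task: the true label is 1 when x > 0.5.
xs = [i / 400 for i in range(400)]
labels = [1 if x > 0.5 else 0 for x in xs]

# A crude "ensemble" of 25 decision stumps with offset thresholds, centered
# slightly off the true boundary so the hard region is visible.
thresholds = [k / 100 for k in range(40, 65)]  # 0.40, 0.41, ..., 0.64

def predict(x):
    """Majority vote over the stump ensemble."""
    votes = sum(1 for t in thresholds if x > t)
    return 1 if 2 * votes >= len(thresholds) else 0

def uncertainty(x):
    """Disagreement score: 0 = unanimous ensemble, 1 = a 50/50 split."""
    p = sum(1 for t in thresholds if x > t) / len(thresholds)
    return 1.0 - abs(2 * p - 1)

# Subgroup rule extracted from uncertainty (the 0.5 cutoff is an assumption):
# "high-uncertainty" instances are those the ensemble argues about.
high = [(x, y) for x, y in zip(xs, labels) if uncertainty(x) > 0.5]
low = [(x, y) for x, y in zip(xs, labels) if uncertainty(x) <= 0.5]

def accuracy(group):
    return sum(predict(x) == y for x, y in group) / len(group)

acc_low, acc_high = accuracy(low), accuracy(high)
print(f"low-uncertainty subgroup:  n={len(low)}, accuracy={acc_low:.3f}")
print(f"high-uncertainty subgroup: n={len(high)}, accuracy={acc_high:.3f}")
```

Under the paper's hypothesis, the same rule that isolates high-uncertainty instances should also isolate a subgroup with lower accuracy, which is what this toy setup exhibits (the low-uncertainty subgroup is classified perfectly, while all errors fall in the high-uncertainty subgroup).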

Identifiers

pubmed: 38222340
pii: 154
pmc: PMC10785870

Publication types

Journal Article

Languages

eng

Citation subsets

IM

Pagination

854-863

Copyright information

©2023 AMIA - All rights reserved.

Authors

Katherine E Brown (KE)

Tennessee Technological University, Cookeville, TN.
Vanderbilt University Medical Center, Nashville, TN.

Steve Talbert (S)

University of Central Florida, Orlando, FL.

Douglas A Talbert (DA)

Tennessee Technological University, Cookeville, TN.

MeSH terms