Measuring the Impact of AI in the Diagnosis of Hospitalized Patients: A Randomized Clinical Vignette Survey Study.


Journal

JAMA
ISSN: 1538-3598
Abbreviated title: JAMA
Country: United States
NLM ID: 7501160

Publication information

Publication date:
19 Dec 2023
History:
medline: 19 Dec 2023
pubmed: 19 Dec 2023
entrez: 19 Dec 2023
Status: ppublish

Abstract

Importance: Artificial intelligence (AI) could support clinicians when diagnosing hospitalized patients; however, systematic bias in AI models could worsen clinician diagnostic accuracy. Recent regulatory guidance has called for AI models to include explanations to mitigate errors made by models, but the effectiveness of this strategy has not been established.

Objective: To evaluate the impact of systematically biased AI on clinician diagnostic accuracy and to determine whether image-based AI model explanations can mitigate model errors.

Design, Setting, and Participants: Randomized clinical vignette survey study administered between April 2022 and January 2023 across 13 US states, involving hospitalist physicians, nurse practitioners, and physician assistants.

Interventions: Clinicians were shown 9 clinical vignettes of patients hospitalized with acute respiratory failure, including their presenting symptoms, physical examination, laboratory results, and chest radiographs. Clinicians were then asked to determine the likelihood of pneumonia, heart failure, or chronic obstructive pulmonary disease as the underlying cause(s) of each patient's acute respiratory failure. To establish baseline diagnostic accuracy, clinicians were shown 2 vignettes without AI model input. Clinicians were then randomized to see 6 vignettes with AI model input, with or without AI model explanations. Among these 6 vignettes, 3 included standard-model predictions and 3 included systematically biased model predictions.

Main Outcomes and Measures: Clinician diagnostic accuracy for pneumonia, heart failure, and chronic obstructive pulmonary disease.

Results: Median participant age was 34 years (IQR, 31-39), and 241 (57.7%) were female. Four hundred fifty-seven clinicians were randomized and completed at least 1 vignette: 231 were randomized to AI model predictions without explanations and 226 to AI model predictions with explanations. Clinicians' baseline diagnostic accuracy was 73.0% (95% CI, 68.3% to 77.8%) for the 3 diagnoses. When shown a standard AI model without explanations, clinician accuracy increased over baseline by 2.9 percentage points (95% CI, 0.5 to 5.2), and by 4.4 percentage points (95% CI, 2.0 to 6.9) when clinicians were also shown AI model explanations. Systematically biased AI model predictions decreased clinician accuracy by 11.3 percentage points (95% CI, 7.2 to 15.5) compared with baseline, and biased AI model predictions with explanations decreased clinician accuracy by 9.1 percentage points (95% CI, 4.9 to 13.2) compared with baseline, a nonsignificant improvement of 2.3 percentage points (95% CI, -2.7 to 7.2) over the systematically biased AI model without explanations.

Conclusions and Relevance: Although standard AI models improved diagnostic accuracy, systematically biased AI models reduced diagnostic accuracy, and commonly used image-based AI model explanations did not mitigate this harmful effect.

Trial Registration: ClinicalTrials.gov Identifier: NCT06098950.

Identifiers

pubmed: 38112814
pii: 2812908
doi: 10.1001/jama.2023.22295

Databases

ClinicalTrials.gov: NCT06098950

Publication types

Journal Article

Languages

eng

Citation subsets

IM

Pagination

2275-2284

Authors

Sarah Jabbour (S)

Computer Science and Engineering, University of Michigan, Ann Arbor.

David Fouhey (D)

Computer Science and Engineering, University of Michigan, Ann Arbor.
Now with Computer Science, Courant Institute, New York University, New York.
Now with Electrical and Computer Engineering, Tandon School of Engineering, New York University, New York.

Stephanie Shepard (S)

Computer Science and Engineering, University of Michigan, Ann Arbor.

Thomas S Valley (TS)

Pulmonary and Critical Care Medicine, Department of Internal Medicine, University of Michigan Medical School, Ann Arbor.

Ella A Kazerooni (EA)

Department of Radiology, University of Michigan Medical School, Ann Arbor.

Nikola Banovic (N)

Computer Science and Engineering, University of Michigan, Ann Arbor.

Jenna Wiens (J)

Computer Science and Engineering, University of Michigan, Ann Arbor.

Michael W Sjoding (MW)

Pulmonary and Critical Care Medicine, Department of Internal Medicine, University of Michigan Medical School, Ann Arbor.

MeSH classifications