Performance of Language Models on the Family Medicine In-Training Exam.


Journal

Family Medicine
ISSN: 1938-3800
Abbreviated title: Fam Med
Country: United States
ID NLM: 8306464

Publication information

Publication date:
12 Aug 2024
History:
medline: 31 Aug 2024
pubmed: 31 Aug 2024
entrez: 29 Aug 2024
Status: ahead of print

Abstract

Artificial intelligence (AI) tools such as ChatGPT and Bard have gained popularity in medical education, but their use in family medicine has not yet been assessed. The objective of this study was to compare the performance of three large language models (LLMs; ChatGPT 3.5, ChatGPT 4.0, and Google Bard) on the family medicine in-training exam (ITE). The 193 multiple-choice questions of the 2022 ITE, written by the American Board of Family Medicine, were entered into ChatGPT 3.5, ChatGPT 4.0, and Bard, and each LLM's performance was scored and scaled. ChatGPT 4.0 scored 167/193 (86.5%), corresponding to a scaled score of 730 out of 800; according to the Bayesian score predictor, it has a 100% chance of passing the family medicine board exam. ChatGPT 3.5 scored 66.3%, translating to a scaled score of 400 and an 88% chance of passing. Bard scored 64.2%, with a scaled score of 380 and an 85% chance of passing. Only ChatGPT 4.0 surpassed the national mean of 68.4% for postgraduate year 3 residents. ChatGPT 4.0 was thus the only LLM to outperform family medicine postgraduate year 3 residents' national averages on the 2022 ITE, providing robust explanations and demonstrating its potential for delivering background information on common medical concepts that appear on board exams.

Identifiers

pubmed: 39207788
doi: 10.22454/FamMed.2024.233738

Publication types

Journal Article

Languages

eng

Citation subsets

IM

Authors

Rana E Hanna (RE)

Morsani College of Medicine, University of South Florida, Tampa, FL.

Logan R Smith (LR)

Morsani College of Medicine, University of South Florida, Tampa, FL.

Rahul Mhaskar (R)

Department of Medical Education, Morsani College of Medicine, University of South Florida, Tampa, FL.

Karim Hanna (K)

Morsani College of Medicine, University of South Florida, Tampa, FL.
Department of Family Medicine, Morsani College of Medicine, University of South Florida, Tampa, FL.

MeSH classifications