Exploring the potential utility of AI large language models for medical ethics: an expert panel evaluation of GPT-4.

Keywords: Decision-making; Ethics, Medical; Information Technology

Journal

Journal of Medical Ethics
ISSN: 1473-4257
Abbreviated title: J Med Ethics
Country: England
NLM ID: 7513619

Publication information

Publication date:
09 Nov 2023
History:
Received: 29 Aug 2023
Accepted: 24 Oct 2023
MEDLINE: 10 Nov 2023
PubMed: 10 Nov 2023
Entrez: 09 Nov 2023
Status: ahead of print

Abstract

Integrating large language models (LLMs) such as GPT-4 into medical ethics is a novel concept, and understanding how effectively these models can aid ethicists in decision-making has significant implications for the healthcare sector. The objective of this study was therefore to evaluate the performance of GPT-4 in responding to complex medical ethics vignettes and to gauge its utility and limitations for aiding medical ethicists.

Using a mixed-methods, cross-sectional survey approach, a panel of six ethicists assessed LLM-generated responses to eight ethical vignettes. The main outcomes measured were relevance, reasoning, depth, technical clarity, non-technical clarity, and acceptability of GPT-4's responses; readability was also assessed. Across the six metrics evaluating the effectiveness of GPT-4's responses, the overall mean score was 4.1/5. GPT-4 was rated highest on technical clarity (4.7/5) and non-technical clarity (4.4/5), whereas the lowest-rated metrics were depth (3.8/5) and acceptability (3.8/5). Inter-rater reliability was poor to moderate, with an intraclass correlation coefficient (ICC) of 0.54 (95% CI: 0.30 to 0.71). Based on panellist feedback, GPT-4 was able to identify and articulate key ethical issues but struggled to appreciate the nuanced aspects of ethical dilemmas and misapplied certain moral principles.

This study reveals limitations in GPT-4's ability to appreciate the depth and nuanced acceptability of real-world ethical dilemmas, particularly those requiring a thorough understanding of relational complexities and context-specific values. Ongoing evaluation of LLM capabilities within medical ethics remains paramount, and further refinement is needed before these models can be used effectively in clinical settings.
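
For illustration only, and not the authors' analysis code: below is a minimal Python sketch of how an intraclass correlation coefficient with a 95% CI, like the one reported in the abstract, might be computed from panel ratings using the pingouin library. The data values, column names, and choice of ICC form are assumptions; the abstract does not specify which ICC model the authors used.

import numpy as np
import pandas as pd
import pingouin as pg

# Hypothetical long-format ratings: 6 raters each score 8 vignettes on a
# 1-5 scale (values are randomly generated placeholders, not study data).
rng = np.random.default_rng(0)
ratings = pd.DataFrame({
    "vignette": [v for v in range(1, 9) for _ in range(6)],
    "rater": [r for _ in range(8) for r in range(1, 7)],
    "score": rng.integers(1, 6, size=48),
})

# pingouin reports the six common ICC forms, each with a 95% CI. ICC2
# (two-way random effects, single rater) is shown here only as a common
# choice for a fixed panel rating the same set of cases.
icc = pg.intraclass_corr(data=ratings, targets="vignette",
                         raters="rater", ratings="score")
print(icc.loc[icc["Type"] == "ICC2", ["Type", "ICC", "CI95%"]])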

Identifiers

pubmed: 37945336
pii: jme-2023-109549
doi: 10.1136/jme-2023-109549

Publication types

Journal Article

Languages

eng

Citation subsets

IM

Copyright information

© Author(s) (or their employer(s)) 2023. No commercial re-use. See rights and permissions. Published by BMJ.

Conflict of interest statement

Competing interests: None declared.

Authors

Michael Balas (M)

Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada. michaelbalas@gmail.com.

Jordan Joseph Wadden (JJ)

Centre for Clinical Ethics, Unity Health Toronto, Toronto, Ontario, Canada.
Clinical Ethics, Scarborough Health Network, Scarborough, Ontario, Canada.

Philip C Hébert (PC)

Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada.
Department of Family and Community Medicine, University of Toronto, Toronto, Ontario, Canada.

Eric Mathison (E)

Philosophy, University of Toronto, Toronto, Ontario, Canada.

Marika D Warren (MD)

Bioethics, Dalhousie University, Halifax, Nova Scotia, Canada.

Victoria Seavilleklein (V)

Clinical Ethics Service, Alberta Health Services, Edmonton, Alberta, Canada.

Daniel Wyzynski (D)

Office of Health Ethics, London Health Sciences Centre, London, Ontario, Canada.

Alison Callahan (A)

Ethics Department, Ontario Shores Centre for Mental Health Sciences, Whitby, Ontario, Canada.

Sean A Crawford (SA)

Division of Vascular Surgery, Department of Surgery, University Health Network, Toronto, Ontario, Canada.

Parnian Arjmand (P)

Mississauga Retina Institute, Toronto, Ontario, Canada.

Edsel B Ing (EB)

Ophthalmology, University of Alberta, Edmonton, Alberta, Canada.

MeSH classifications