A future role for health applications of large language models depends on regulators enforcing safety standards.
Journal
The Lancet. Digital health
ISSN: 2589-7500
Abbreviated title: Lancet Digit Health
Country: England
NLM ID: 101751302
Publication information
Publication date:
Sep 2024
History:
Received: 14 Feb 2024
Revised: 17 May 2024
Accepted: 6 Jun 2024
MEDLINE: 24 Aug 2024
PubMed: 24 Aug 2024
Entrez: 23 Aug 2024
Status: ppublish
Abstract
Amid the rapid integration of artificial intelligence into clinical settings, large language models (LLMs), such as Generative Pre-trained Transformer-4, have emerged as multifaceted tools with potential for health-care delivery, diagnosis, and patient care. However, the deployment of LLMs raises substantial regulatory and safety concerns. Owing to their high output variability, poor inherent explainability, and the risk of so-called AI hallucinations, LLM-based health-care applications that serve a medical purpose face regulatory challenges for approval as medical devices under US and EU laws, including the recently passed EU Artificial Intelligence Act. Despite unaddressed risks for patients, including misdiagnosis and unverified medical advice, such applications are already available on the market. The regulatory ambiguity surrounding these tools creates an urgent need for frameworks that accommodate their unique capabilities and limitations. Alongside the development of these frameworks, existing regulations should be enforced. If regulators hesitate to enforce existing regulations in a market whose supply and development are dominated by large technology companies, the consequences of harm to laypeople will force belated action, damaging the potential of LLM-based applications for layperson medical advice.
Identifiers
pubmed: 39179311
pii: S2589-7500(24)00124-9
doi: 10.1016/S2589-7500(24)00124-9
Publication types
Journal Article
Review
Languages
eng
Citation subsets
IM
Pagination
e662-e672
Copyright information
Copyright © 2024 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY 4.0 license. All rights reserved.
Conflict of interest statement
Declaration of interests SG is an advisory group member of the EY-coordinated study on regulatory governance and innovation in the field of medical devices conducted on behalf of the Directorate General for Health and Food Safety of the European Commission. SG has or has had consulting relationships with Una Health, Lindus Health, Flo, Thymia, FORUM Institut für Management, High-Tech Gründerfonds Management, and Ada Health, and holds share options in Ada Health. JNK received funding from GlaxoSmithKline; has or has had consulting relationships with Owkin, DoMore Diagnostics, Histofy, and Panakeia; received honoraria for lectures and events from Bayer, Eisai, Merck Sharp & Dohme, Bristol Myers Squibb, Roche, Pfizer, and Fresenius; participated on boards of Bayer, Eisai, Merck Sharp & Dohme, Bristol Myers Squibb, Roche, and Pfizer; and has a leadership role and holds stock in StratifAI. OF has worked as a freelance doctor for Clinical Research Services Berlin, and has a leadership role and holds stock in WhalesDontFly. ICW declares no competing interests.