Breaking barriers: can ChatGPT compete with a shoulder and elbow specialist in diagnosis and management?
Keywords
Artificial intelligence
ChatGPT
Diagnosis and management
Empathy
Orthopedics
Shoulder and elbow surgery
Journal
JSES international
ISSN: 2666-6383
Abbreviated title: JSES Int
Country: United States
ID NLM: 101763461
Publication information
Publication date: Nov 2023
History:
medline: 16 Nov 2023
pubmed: 16 Nov 2023
entrez: 16 Nov 2023
Status: epublish
Abstract
Background
ChatGPT is an artificial intelligence (AI) language processing model that uses deep learning to generate human-like responses to natural language inputs. Its potential use in health care has raised questions, and several studies have assessed its effectiveness in writing articles, clinical reasoning, and solving complex questions. This study aims to investigate ChatGPT's capabilities and implications in diagnosing and managing patients with new shoulder and elbow complaints in a private clinical setting, to provide insights into its potential use as a diagnostic tool for patients and a first-consultation resource for primary physicians.
Methods
In a private clinical setting, patients were assessed by ChatGPT after being seen by a shoulder and elbow specialist for shoulder and elbow symptoms. For assessment by the AI model, a research fellow filled out a standardized form including age, gender, major comorbidities; the symptoms and their localization, natural history, and duration; any associated symptoms or movement deficits; aggravating/relieving factors; and the x-ray/imaging report, if present. This form was submitted through the ChatGPT portal, and the AI model was asked for a diagnosis and the best management modality.
Results
A total of 29 patients (15 males and 14 females) were included in this study. The AI model correctly chose the diagnosis and the management in 93% (27/29) and 83% (24/29) of patients, respectively. Furthermore, of the 24 patients who were managed correctly, ChatGPT did not specify the appropriate management in 6 patients and chose only one management option in 5 patients where two were applicable and the choice depended on the patient's preference. Therefore, 55% of ChatGPT's management responses were poor.
Conclusion
ChatGPT made a worthy opponent; however, in its current form it will not be able to replace a shoulder and elbow specialist in diagnosing and treating patients, for reasons such as misdiagnosis, poor management, lack of empathy and interaction with patients, dependence on magnetic resonance imaging reports, and lack of up-to-date knowledge.
Identifiers
pubmed: 37969495
doi: 10.1016/j.jseint.2023.07.018
pii: S2666-6383(23)00212-8
pmc: PMC10638599
Publication types
Journal Article
Languages
eng
Pagination
2534-2541
Copyright information
© 2023 The Authors.