Comparison of clinical note quality between an automated digital intake tool and the standard note in the emergency department.
Journal
The American journal of emergency medicine
ISSN: 1532-8171
Abbreviated title: Am J Emerg Med
Country: United States
NLM ID: 8309942
Publication information
Publication date: 01 2023
History:
received: 16 Jul 2022
revised: 05 Sep 2022
accepted: 07 Oct 2022
pubmed: 04 Nov 2022
medline: 15 Dec 2022
entrez: 03 Nov 2022
Status: ppublish
Abstract
BACKGROUND
Medical encounters require an efficient and focused history of present illness (HPI) to create differential diagnoses and guide diagnostic testing and treatment. Our aim was to compare the HPI sections of notes created by an automated digital intake tool with those of standard medical notes created by clinicians.
METHODS
Prospective trial in a quaternary academic Emergency Department (ED). Notes were compared using the 5-point Physician Documentation Quality Instrument (PDQI-9) scale and the Centers for Medicare & Medicaid Services (CMS) level of complexity index. Reviewers were board-certified emergency medicine physicians blinded to note origin, and they received training and calibration prior to note assessments. A difference of 1 point was considered clinically significant. Analysis included McNemar's test (binary outcomes), Wilcoxon rank tests (Likert scores), and Cohen's kappa for agreement.
RESULTS
A total of 148 ED medical encounters were charted by both a digital note and a standard clinical note. The ability to capture patient information was assessed by comparing note content across paired charts (digital and standard notes on the same patient), as well as the scores given by the reviewers. Reviewer agreement was kappa 0.56 (95% CI 0.49-0.64), indicating a moderate level of agreement between reviewers scoring the same patient chart. Across all 18 questions of the PDQI-9 and CMS scales, the average agreement between the standard clinical note and the digital note was 54.3% (IQR 44.4-66.7%). There was a moderate level of agreement between the content of standard and digital notes (kappa 0.54, 95% CI 0.49-0.60). The quality of the digital note was within the 1-point clinically significant difference for all attributes except conciseness. Digital notes had a higher frequency of identified CMS severity elements.
CONCLUSION
Digitally generated clinical notes showed moderate agreement with standard clinical notes and were within the 1-point clinically significant difference for all attributes except conciseness. Digital notes more reliably documented billing components of severity. The use of automated notes should be further explored to evaluate their utility in facilitating documentation of patient encounters.
Identifiers
pubmed: 36327754
pii: S0735-6757(22)00639-8
doi: 10.1016/j.ajem.2022.10.009
Publication types
Journal Article
Languages
eng
Citation subsets
IM
Pagination
79-85
Copyright information
Copyright © 2022 Elsevier Inc. All rights reserved.
Conflict of interest statement
Declaration of Competing Interest This study was funded by a sponsored research collaboration between Diagnostic Robotics and Mayo Clinic. Data collection and analysis were done independently by study coordinators and a biostatistician at Mayo Clinic. Methodological strategies were used to mitigate the potential conflicts of interest of some of the authors. None of the reviewers or the statistician has potential conflicts of interest to disclose. The contents of this manuscript are solely the responsibility of the authors and do not necessarily represent the official view of Diagnostic Robotics. Daniel Cabrera, MD, Fernanda Bellolio, MD, and Andy Boggust, MD each have intellectual property associated with the product described in this manuscript. Ron Eshel, MD was an employee of Diagnostic Robotics. Nathan Shapiro, MD is a paid consultant and small shareholder of Diagnostic Robotics.