Comparison of clinical note quality between an automated digital intake tool and the standard note in the emergency department.


Journal

The American Journal of Emergency Medicine
ISSN: 1532-8171
Abbreviated title: Am J Emerg Med
Country: United States
NLM ID: 8309942

Publication information

Publication date:
January 2023
History:
received: 2022-07-16
revised: 2022-09-05
accepted: 2022-10-07
pubmed: 2022-11-04
medline: 2022-12-15
entrez: 2022-11-03
Status: ppublish


Abstract

BACKGROUND
Medical encounters require an efficient and focused history of present illness (HPI) to create differential diagnoses and guide diagnostic testing and treatment. Our aim was to compare the HPI of notes created by an automated digital intake tool versus standard medical notes created by clinicians.
METHODS
Prospective trial in a quaternary academic Emergency Department (ED). Notes were compared using the 5-point Physician Documentation Quality Instrument (PDQI-9) scale and the Centers for Medicare & Medicaid Services (CMS) level of complexity index. Reviewers were board-certified emergency medicine physicians blinded to note origin, and received training and calibration prior to note assessments. A difference of 1 point was considered clinically significant. Analysis included McNemar's test (binary outcomes), the Wilcoxon signed-rank test (Likert scores), and agreement assessed with Cohen's kappa.
RESULTS
A total of 148 ED medical encounters were charted by both a digital note and a standard clinical note. The ability to capture patient information was assessed by comparing note content across paired charts (digital and standard notes on the same patient), as well as the scores given by the reviewers. Reviewer agreement was kappa 0.56 (CI 0.49-0.64), indicating a moderate level of agreement between reviewers scoring the same patient chart. Across all 18 questions of the PDQI-9 and CMS scales, the average agreement between the standard clinical note and the digital note was 54.3% (IQR 44.4-66.7%). There was a moderate level of agreement between the content of standard and digital notes (kappa 0.54, 95% CI 0.49-0.60). The quality of the digital note was within the 1-point clinically significant difference for all attributes except conciseness. Digital notes had a higher frequency of CMS severity elements identified.
CONCLUSION
Digitally generated clinical notes showed moderate agreement with standard clinical notes and were within the one-point clinically significant difference for all attributes except conciseness. Digital notes more reliably documented billing components of severity. The use of automated notes should be further explored to evaluate their utility in facilitating documentation of patient encounters.
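The inter-reviewer agreement reported above (kappa 0.56) uses Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. A minimal sketch of that calculation, with hypothetical Likert scores standing in for the study's actual reviewer data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items where the raters match.
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal frequencies,
    # summed over all categories used by either rater.
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum((ca[c] / n) * (cb[c] / n) for c in set(ca) | set(cb))
    return (po - pe) / (1 - pe)

# Hypothetical 1-5 PDQI-9-style scores from two blinded reviewers on ten notes
a = [4, 4, 3, 5, 2, 4, 3, 3, 5, 4]
b = [4, 3, 3, 5, 2, 4, 4, 3, 5, 3]
kappa = cohens_kappa(a, b)  # ≈ 0.58: "moderate" on common benchmarks
```

Values around 0.41-0.60 are conventionally labeled moderate agreement, which is how the study characterizes both its reviewer agreement and the note-content agreement.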

Identifiers

pubmed: 36327754
pii: S0735-6757(22)00639-8
doi: 10.1016/j.ajem.2022.10.009

Publication types

Journal Article

Languages

eng

Citation subsets

IM

Pagination

79-85

Copyright information

Copyright © 2022 Elsevier Inc. All rights reserved.

Conflict of interest statement

Declaration of Competing Interest: This study was funded by a sponsored research collaboration between Diagnostic Robotics and Mayo Clinic. Data collection and analysis were done independently by study coordinators and a biostatistician at Mayo Clinic. Methodological strategies were used to mitigate the potential conflicts of interest of some of the authors. None of the reviewers or the statistician has potential conflicts of interest to disclose. The contents of this manuscript are solely the responsibility of the authors and do not necessarily represent the official view of Diagnostic Robotics. Daniel Cabrera, MD, Fernanda Bellolio, MD, and Andy Boggust, MD have intellectual property associated with the product described in this manuscript. Ron Eshel, MD was an employee of Diagnostic Robotics. Nathan Shapiro, MD is a paid consultant and small shareholder of Diagnostic Robotics.

Authors

Ron Eshel (R)

Department of Anesthesia, Critical Care and Pain, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel.

Fernanda Bellolio (F)

Department of Emergency Medicine, Mayo Clinic, Rochester, MN, United States.

Andy Boggust (A)

Department of Emergency Medicine, Mayo Clinic, Rochester, MN, United States.

Nathan I Shapiro (NI)

Department of Emergency Medicine, Beth Israel Deaconess Medical Center, Boston, MA, United States; Diagnostic Robotics, Tel Aviv, Israel.

Aidan F Mullan (AF)

Department of Health Sciences Research, Division of Health Care Policy and Research, Mayo Clinic, Rochester, MN, United States.

Heather A Heaton (HA)

Department of Emergency Medicine, Mayo Clinic, Rochester, MN, United States.

Bo E Madsen (BE)

Department of Emergency Medicine, Mayo Clinic, Rochester, MN, United States.

James L Homme (JL)

Department of Emergency Medicine, Mayo Clinic, Rochester, MN, United States.

Benjamin W Iliff (BW)

Department of Emergency Medicine, Mayo Clinic, Rochester, MN, United States.

Kharmene L Sunga (KL)

Department of Anesthesia, Critical Care and Pain, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel.

Cameron R Wangsgard (CR)

Department of Emergency Medicine, Mayo Clinic, Rochester, MN, United States.

Derek Vanmeter (D)

Department of Emergency Medicine, Mayo Clinic, Rochester, MN, United States.

Daniel Cabrera (D)

Department of Emergency Medicine, Mayo Clinic, Rochester, MN, United States. Electronic address: Cabrera.daniel@mayo.edu.
