The Use of Artificial Intelligence to Improve Readability of Otolaryngology Patient Education Materials.
PS/QI
artificial intelligence
patient education
readability
Journal
Otolaryngology–Head and Neck Surgery: Official Journal of the American Academy of Otolaryngology-Head and Neck Surgery
ISSN: 1097-6817
Abbreviated title: Otolaryngol Head Neck Surg
Country: England
NLM ID: 8508176
Publication information
Publication date: May 15, 2024
History:
Received: November 29, 2023
Revised: February 14, 2024
Accepted: February 24, 2024
MEDLINE: May 16, 2024
PubMed: May 16, 2024
Entrez: May 16, 2024
Status: Ahead of print
Abstract
The recommended readability of health education materials is the sixth-grade level. Artificial intelligence (AI) large language models such as the newly released ChatGPT4 might facilitate the conversion of patient education materials at scale. We sought to ascertain whether online otolaryngology education materials meet recommended reading levels and whether ChatGPT4 could rewrite these materials to the sixth-grade level. We also wished to ensure that converted materials were accurate and retained sufficient content. Seventy-one articles from patient education materials published online by the American Academy of Otolaryngology-Head and Neck Surgery were selected. Articles were entered into ChatGPT4 with the prompt "translate this text to a sixth-grade reading level." The Flesch Reading Ease Score (FRES) and Flesch-Kincaid Grade Level (FKGL) were determined for each article before and after AI conversion. Each article and conversion were reviewed for factual inaccuracies, and each conversion was reviewed for content retention. The 71 articles had an initial average FKGL of 11.03 and FRES of 46.79. After conversion by ChatGPT4, the average FKGL across all articles was 5.80 and the average FRES was 77.27. Converted materials provided enough detail for patient education, with no factual errors. We found that ChatGPT4 improved the reading accessibility of otolaryngology online patient education materials to recommended levels quickly and effectively. Physicians can determine whether their patient education materials exceed currently recommended reading levels by using widely available measurement tools, and then apply AI dialogue platforms to modify materials to more accessible levels as needed. Level of evidence: 5.
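The FRES and FKGL metrics used in the study are simple formulas over sentence, word, and syllable counts. As a rough illustration (not the study's actual tooling, which relied on established measurement software), the sketch below computes both scores with a naive vowel-group syllable heuristic; production tools typically use a pronouncing dictionary instead, and the `readability` helper here is a hypothetical name:

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count vowel groups, then drop a trailing silent "e".
    # Real tools use a pronouncing dictionary for accurate syllable counts.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)  # every word has at least one syllable

def readability(text: str) -> tuple[float, float]:
    """Return (FRES, FKGL) for a plain-text passage."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    # Standard Flesch formulas: higher FRES = easier; FKGL approximates US grade.
    fres = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fres, fkgl
```

Short sentences with one-syllable words score a high FRES and a low FKGL, while long polysyllabic sentences do the opposite, which is the pattern the study's before/after averages (FKGL 11.03 → 5.80, FRES 46.79 → 77.27) reflect.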
Publication types
Journal Article
Languages
eng
Citation subsets
IM
Copyright information
© 2024 American Academy of Otolaryngology–Head and Neck Surgery Foundation.