Assessing the Readability of Online Patient Education Materials in Obstetrics and Gynecology Using Traditional Measures: Comparative Analysis and Limitations.
Keywords: assessment, education, education material, gynecology, health education, health literacy, literature, medical documents, obstetrics, obstetrics and gynecology, online content, online education, online patient education materials, readability, tool, utilization
Journal
Journal of medical Internet research
ISSN: 1438-8871
Abbreviated title: J Med Internet Res
Country: Canada
NLM ID: 100959882
Publication information
Publication date: 30 Aug 2023
History:
  received: 7 Feb 2023
  accepted: 4 Jul 2023
  revised: 6 Jun 2023
  medline: 31 Aug 2023
  pubmed: 30 Aug 2023
  entrez: 30 Aug 2023
Status: epublish
Abstract
BACKGROUND
Patient education materials (PEMs) can be vital sources of information for the general population. However, despite American Medical Association (AMA) and National Institutes of Health (NIH) recommendations to make PEMs easier to read for patients with low health literacy, PEMs often do not adhere to these recommendations. The readability of online PEMs in the obstetrics and gynecology (OB/GYN) field, in particular, has not been thoroughly investigated.
OBJECTIVE
The study sampled online OB/GYN PEMs and aimed to examine (1) agreement among traditional readability measures (TRMs), (2) adherence of online PEMs to the AMA and NIH recommendations, and (3) whether the readability level of online PEMs varied by web-based source and medical topic. This study is not a scoping review; rather, it focused on scoring the readability of OB/GYN PEMs with traditional measures to add empirical evidence to the literature.
METHODS
A total of 1576 online OB/GYN PEMs were collected via 3 major search engines. In total, 93 were excluded because their content was too short (fewer than 100 words), yielding 1483 PEMs for analysis. Each PEM was scored with 4 TRMs: the Flesch-Kincaid grade level, the Gunning fog index, the Simple Measure of Gobbledygook (SMOG), and the Dale-Chall formula. The PEMs were categorized by publication source and medical topic by 2 research team members, and the readability scores of the categories were compared statistically.
RESULTS
The 4 TRMs did not agree with each other, leading to the use of an averaged readability (composite) score for comparison. The composite scores across all online PEMs were not normally distributed, with a median at the 11th-grade level. Governmental PEMs were the easiest to read among the source categories, and PEMs about menstruation were the most difficult to read. However, the differences in readability scores among sources and topics were small.
CONCLUSIONS
This study found that online OB/GYN PEMs did not meet the AMA and NIH readability recommendations and would be difficult for patients with low health literacy to read and comprehend. Both findings are consistent with the literature. This study highlights the need to improve the readability of OB/GYN PEMs to help patients make informed decisions. Research has been done to create more sophisticated readability measures for medical and health documents; once validated, these tools should be used by web-based creators of health education materials.
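The methods section describes scoring each PEM with several traditional readability formulas and averaging them into a composite score. A minimal sketch of that scoring step, using the published Flesch-Kincaid grade level and Gunning fog formulas with a naive vowel-group syllable counter (the function names and the syllable heuristic are illustrative assumptions, not the authors' actual pipeline):

```python
import re
from statistics import mean

def count_syllables(word: str) -> int:
    """Naive heuristic: count vowel groups, drop a trailing silent 'e'.
    Real readability tools use pronunciation dictionaries instead."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability_scores(text: str) -> dict:
    """Score a text with two traditional readability measures
    and average them into a simple composite grade level."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # "Complex" words (3+ syllables) drive the Gunning fog index.
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)
    wps = len(words) / len(sentences)  # words per sentence
    fkgl = 0.39 * wps + 11.8 * (syllables / len(words)) - 15.59
    fog = 0.4 * (wps + 100 * complex_words / len(words))
    return {"fkgl": fkgl, "fog": fog, "composite": mean([fkgl, fog])}
```

SMOG and Dale-Chall are omitted for brevity: SMOG needs a polysyllable count over a fixed sentence sample, and Dale-Chall requires the list of ~3000 familiar words, but both would be averaged into the composite the same way.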
Identifiers
pubmed: 37647115
pii: v25i1e46346
doi: 10.2196/46346
pmc: PMC10500363
Publication types
Journal Article
Languages
eng
Citation subsets
IM
Pagination
e46346
Copyright information
©Anunita Nattam, Tripura Vithala, Tzu-Chun Wu, Shwetha Bindhu, Gregory Bond, Hexuan Liu, Amy Thompson, Danny T Y Wu. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 30.08.2023.