Feasibility of Virtual Reality Audiological Testing: Prospective Study.

audiology; hearing; hearing loss; real-world performance; speech performance; virtual reality

Journal

JMIR Serious Games
ISSN: 2291-9279
Abbreviated title: JMIR Serious Games
Country: Canada
NLM ID: 101645255

Publication information

Publication date:
August 31, 2021
History:
received: January 6, 2021
revised: May 13, 2021
accepted: May 29, 2021
entrez: August 31, 2021
pubmed: September 1, 2021
medline: September 1, 2021
Status: epublish


Abstract sections

BACKGROUND
It has been noted in the literature that there is a gap between clinical assessment and real-world performance. Real-world conversations involve both auditory and visual information, yet no audiological assessment tools incorporate visual information. Virtual reality (VR) technology has been applied to various areas, including audiology; however, the use of VR in speech-in-noise perception has not yet been investigated.
OBJECTIVE
The purpose of this study was to investigate the impact of virtual space (VS) on speech performance and the feasibility of VS as a speech test instrument. We hypothesized that individuals' ability to recognize speech would improve when visual cues were provided.
METHODS
A total of 30 individuals with normal hearing and 25 individuals with hearing loss completed pure-tone audiometry and the Korean version of the Hearing in Noise Test (K-HINT) under three conditions (conventional K-HINT [cK-HINT], VS on PC [VSPC], and VS head-mounted display [VSHMD]) at -10, -5, 0, and +5 dB signal-to-noise ratios (SNRs). In all conditions, participants listened to target speech and repeated it back to the tester. Hearing aid users in the hearing loss group completed testing under both unaided and aided conditions. A questionnaire administered after testing gathered subjective opinions on the headset, the VSHMD condition, and test preference.
RESULTS
Provision of visual information had a significant impact on speech performance: the Mann-Whitney U test showed a statistically significant difference (P<.05) between the normal-hearing and hearing-impaired groups under all test conditions. Hearing aid use led to better integration of auditory and visual cues; between hearing aid and non-hearing aid users, the Mann-Whitney U test showed significant differences at -5 dB (P=.04) and 0 dB (P=.02) SNRs under the cK-HINT condition, and at -10 dB (P=.007) and 0 dB (P=.04) SNRs under the VSPC condition. Participants reported positive responses across almost all questionnaire items except the weight of the headset: they preferred a test method with visual imagery but found the headset heavy.
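The group comparisons described above rely on the Mann-Whitney U test, a nonparametric test for two independent samples that does not assume normally distributed scores. A minimal sketch of such a comparison in Python follows; the scores are hypothetical illustration values, not the study's data.

```python
# Hedged sketch: comparing speech-recognition scores between two independent
# groups with the Mann-Whitney U test, as done in the study for the
# normal-hearing vs hearing-impaired groups. Data below are hypothetical.
from scipy.stats import mannwhitneyu

# Hypothetical sentence-recognition scores (% correct) at one SNR condition
normal_hearing = [88, 92, 85, 90, 95, 87, 91, 89]
hearing_impaired = [60, 55, 70, 62, 58, 65, 59, 61]

# Two-sided test: is there a difference in score distributions between groups?
stat, p = mannwhitneyu(normal_hearing, hearing_impaired,
                       alternative="two-sided")
print(f"U = {stat}, P = {p:.4f}")
```

With these illustrative values every normal-hearing score exceeds every hearing-impaired score, so the U statistic reaches its maximum (n1 x n2 = 64) and P is far below .05, mirroring the pattern of significance the study reports.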
CONCLUSIONS
These findings align with previous literature showing that visual cues benefit communication. This is the first study to include hearing aid users with a more naturalistic stimulus and a relatively simple test environment, suggesting the feasibility of VR audiological testing in clinical practice.

Identifiers

pubmed: 34463624
pii: v9i3e26976
doi: 10.2196/26976
pmc: PMC8441603

Publication types

Journal Article

Languages

eng

Pagination

e26976

Comments and corrections

Type: ErratumIn

Copyright information

©Hye Yoon Seol, Soojin Kang, Jihyun Lim, Sung Hwa Hong, Il Joon Moon. Originally published in JMIR Serious Games (https://games.jmir.org), 31.08.2021.


Authors

Hye Yoon Seol (HY)

Medical Research Institute, Sungkyunkwan University School of Medicine, Suwon, Republic of Korea.
Hearing Research Laboratory, Samsung Medical Center, Seoul, Republic of Korea.

Soojin Kang (S)

Medical Research Institute, Sungkyunkwan University School of Medicine, Suwon, Republic of Korea.
Hearing Research Laboratory, Samsung Medical Center, Seoul, Republic of Korea.

Jihyun Lim (J)

Center for Clinical Epidemiology, Samsung Medical Center, Seoul, Republic of Korea.

Sung Hwa Hong (SH)

Hearing Research Laboratory, Samsung Medical Center, Seoul, Republic of Korea.
Department of Otolaryngology-Head & Neck Surgery, Samsung Changwon Hospital, Changwon, Republic of Korea.

Il Joon Moon (IJ)

Hearing Research Laboratory, Samsung Medical Center, Seoul, Republic of Korea.
Department of Otolaryngology-Head & Neck Surgery, Sungkyunkwan University School of Medicine, Samsung Medical Center, Seoul, Republic of Korea.

MeSH classifications