Luminance effects on pupil dilation in speech-in-noise recognition.


Journal

PLoS One
ISSN: 1932-6203
Abbreviated title: PLoS One
Country: United States
NLM ID: 101285081

Publication information

Publication date:
2022
History:
received: 2022-04-06
accepted: 2022-11-17
entrez: 2022-12-02
pubmed: 2022-12-03
medline: 2022-12-07
Status: epublish

Abstract

There is increasing interest in the fields of audiology and speech communication in measuring the effort it takes to listen in noisy environments, with obvious implications for populations suffering from hearing loss. Pupillometry offers one avenue for progress in this enterprise, but important methodological questions remain to be addressed before such tools can serve practical applications. Cocktail-party situations typically occur in less-than-ideal lighting conditions, e.g. a pub or a restaurant, and it is unclear how robust pupil dynamics are to changes in luminance. In this study, we first used a well-known paradigm in which sentences were presented at different signal-to-noise ratios (SNR), all conducive to good intelligibility. This enabled us to replicate known findings, e.g. a larger and later peak pupil dilation (PPD) at adverse SNRs or when sentences were misunderstood, and to investigate the dependency of the PPD on sentence duration. A second experiment repeated two of the SNR levels, 0 and +14 dB, but with pupil size measured at 0, 75, and 220 lux. The results showed that the impact of luminance on the SNR effect was non-monotonic (sub-optimal in darkness and in bright light); as such, there is no trivial way to derive pupillary metrics that are robust to differences in background light, which poses considerable constraints for applications of pupillometry in daily life. Our findings raise an under-examined but crucial issue in the design and interpretation of listening-effort studies using pupillometry, and offer important insights for future clinical applications of pupillometry across sites.
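For readers unfamiliar with the PPD metric named in the abstract, the sketch below shows the conventional computation: pupil diameter is baseline-corrected against a pre-stimulus interval, and the PPD is the maximum of the corrected trace within a post-onset window. This is a minimal illustration under assumed parameters (the function name, 1 s baseline, and 0-4 s search window are hypothetical choices for the example), not the authors' actual analysis pipeline.

```python
import numpy as np

def peak_pupil_dilation(trace, fs, baseline_s=1.0, window_s=(0.0, 4.0)):
    """Baseline-corrected peak pupil dilation (PPD) for a single trial.

    trace      : 1-D array of pupil-diameter samples; stimulus onset is
                 assumed to occur baseline_s seconds into the trace.
    fs         : sampling rate in Hz.
    baseline_s : duration of the pre-stimulus baseline, in seconds (assumed).
    window_s   : (start, end) of the post-onset search window, in seconds
                 relative to stimulus onset (assumed).
    """
    onset = int(baseline_s * fs)
    # Mean pupil diameter over the pre-stimulus baseline (NaNs = blinks).
    baseline = np.nanmean(trace[:onset])
    start = onset + int(window_s[0] * fs)
    end = onset + int(window_s[1] * fs)
    # PPD = maximum dilation relative to baseline within the window.
    return np.nanmax(trace[start:end]) - baseline
```

The latency of that maximum (its sample index divided by fs) gives the "later peak" measure the abstract also refers to; both quantities are what a luminance confound would distort across testing sites.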

Identifiers

pubmed: 36459511
doi: 10.1371/journal.pone.0278506
pii: PONE-D-22-07841
pmc: PMC9718387

Publication types

Journal Article
Research Support, Non-U.S. Gov't

Languages

eng

Citation subset

IM

Pagination

e0278506

Copyright information

Copyright: © 2022 Zhang et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Conflict of interest statement

The authors have declared that no competing interests exist.

References

Trends Hear. 2018 Jan-Dec;22:2331216518808962
pubmed: 30375282
PLoS One. 2018 Jun 13;13(6):e0197739
pubmed: 29897946
Psychophysiology. 1975 Jan;12(1):90-3
pubmed: 1114216
Trends Hear. 2021 Jan-Dec;25:23312165211013256
pubmed: 34024219
Sci Rep. 2022 Jan 26;12(1):1390
pubmed: 35082319
PLoS One. 2021 Mar 3;16(3):e0233251
pubmed: 33657100
Trends Hear. 2016 Oct 3;20:
pubmed: 27698260
Science. 1964 Mar 13;143(3611):1190-2
pubmed: 17833905
Int J Psychophysiol. 2004 Mar;52(1):77-86
pubmed: 15003374
Hear Res. 2017 Aug;351:68-79
pubmed: 28622894
Compr Physiol. 2015 Jan;5(1):439-73
pubmed: 25589275
Hear Res. 2014 Jun;312:114-20
pubmed: 24709275
PLoS One. 2016 Apr 18;11(4):e0153566
pubmed: 27089436
Hear Res. 2018 Aug;365:90-99
pubmed: 29779607
Int J Psychophysiol. 2017 Feb;112:40-45
pubmed: 27979740
Psychophysiology. 2010 May 1;47(3):560-9
pubmed: 20070575
Ear Hear. 2015 Jul-Aug;36(4):e153-65
pubmed: 25654299
Psychol Bull. 1982 Mar;91(2):276-92
pubmed: 7071262
Trends Hear. 2018 Jan-Dec;22:2331216518800869
pubmed: 30261825
Ear Hear. 2012 Mar-Apr;33(2):291-300
pubmed: 21921797
Int J Audiol. 2021 Oct;60(10):762-772
pubmed: 33320028
Psychophysiology. 1996 Jul;33(4):457-61
pubmed: 8753946
Int J Audiol. 2005 Jun;44(6):358-69
pubmed: 16078731
Ear Hear. 2011 Jul-Aug;32(4):498-510
pubmed: 21233711
Trends Hear. 2018 Jan-Dec;22:2331216518777174
pubmed: 30249172
Hear Res. 2021 Oct;410:108348
pubmed: 34543837
Behav Res Methods. 2018 Feb;50(1):94-106
pubmed: 29330763
J Cogn. 2018 Feb 21;1(1):16
pubmed: 31517190
Front Psychol. 2014 Mar 13;5:218
pubmed: 24659980
Int J Psychophysiol. 2015 Jul;97(1):30-7
pubmed: 25941013
Trends Hear. 2021 Jan-Dec;25:23312165211009351
pubmed: 33926329
Ear Hear. 2017 Nov/Dec;38(6):690-700
pubmed: 28640038
J Mem Lang. 2008 Nov;59(4):434-446
pubmed: 19884961
Psychophysiology. 2014 Mar;51(3):277-84
pubmed: 24506437
Sci Rep. 2021 Jan 12;11(1):707
pubmed: 33436889
Behav Res Methods. 2019 Apr;51(2):865-878
pubmed: 30264368
Ear Hear. 2021 Nov-Dec 01;42(6):1668-1679
pubmed: 33859121

Authors

Yue Zhang (Y)

Department of Otolaryngology, McGill University, Montreal, Canada.
Centre for Research on Brain, Language and Music, Montreal, Canada.
Centre for Interdisciplinary Research in Music Media and Technology, Montreal, Canada.

Florian Malaval (F)

Department of Otolaryngology, McGill University, Montreal, Canada.

Alexandre Lehmann (A)

Department of Otolaryngology, McGill University, Montreal, Canada.
Centre for Research on Brain, Language and Music, Montreal, Canada.
Centre for Interdisciplinary Research in Music Media and Technology, Montreal, Canada.

Mickael L D Deroche (MLD)

Department of Otolaryngology, McGill University, Montreal, Canada.
Centre for Research on Brain, Language and Music, Montreal, Canada.
Centre for Interdisciplinary Research in Music Media and Technology, Montreal, Canada.
Department of Psychology, Concordia University, Montreal, Canada.


MeSH classifications