Tonal Language Speakers Are Better Able to Segregate Competing Speech According to Talker Sex Differences.
Journal
Journal of Speech, Language, and Hearing Research: JSLHR
ISSN: 1558-9102
Abbreviated title: J Speech Lang Hear Res
Country: United States
NLM ID: 9705610
Publication information
Publication date: Aug 10, 2020
History:
pubmed: Jul 22, 2020
medline: Jun 22, 2021
entrez: Jul 22, 2020
Status: ppublish
Abstract
Purpose: The aim of this study was to compare release from masking (RM) between Mandarin-speaking and English-speaking listeners with normal hearing for competing speech when target-masker sex cues, spatial cues, or both were available.

Method: Speech recognition thresholds (SRTs) for competing speech were measured in 21 Mandarin-speaking and 15 English-speaking adults with normal hearing using a modified coordinate response measure task. SRTs were measured for target sentences produced by a male talker in the presence of two masker talkers (different male talkers or female talkers). The target sentence was always presented directly in front of the listener, and the maskers were either colocated with the target or spatially separated from it (+90°, -90°). Stimuli were presented via headphones and virtually spatialized using head-related transfer functions. Three masker conditions were used to measure RM relative to the baseline condition: (a) talker sex cues, (b) spatial cues, or (c) combined talker sex and spatial cues.

Results: The results showed large amounts of RM from talker sex and/or spatial cues. There was no significant difference in SRTs between Chinese and English listeners in the baseline condition, where no talker sex or spatial cues were available. Furthermore, there was no significant difference in RM between Chinese and English listeners when spatial cues were available. However, RM was significantly larger for Chinese listeners when talker sex cues or combined talker sex and spatial cues were available.

Conclusion: Listeners who speak a tonal language such as Mandarin Chinese may be able to take greater advantage of talker sex cues than listeners who do not speak a tonal language.
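As described in the Method, RM is the baseline SRT minus the SRT in a cued condition, so a positive RM (in dB) reflects a benefit from the cue: the listener can recognize the target at a poorer signal-to-noise ratio. The following is a minimal sketch of that arithmetic only; the SRT values, variable names, and condition labels are illustrative assumptions, not data or analysis code from the study.

```python
# Release from masking (RM): improvement in speech recognition threshold
# (SRT) relative to the baseline condition (colocated maskers, same-sex
# talkers). Lower SRTs are better, so RM = baseline SRT - cued SRT, in dB.

# Illustrative placeholder values; NOT the SRTs reported in the study.
BASELINE_SRT_DB = -2.0  # colocated, male target with male maskers

srt_db = {
    "talker_sex_cues": -12.0,       # female maskers, colocated
    "spatial_cues": -8.0,           # male maskers, separated (+/-90 deg)
    "sex_and_spatial_cues": -16.0,  # female maskers, separated
}

def release_from_masking(baseline_db: float, cued_db: float) -> float:
    """Positive RM means the cue made the target easier to recognize."""
    return baseline_db - cued_db

if __name__ == "__main__":
    for condition, srt in srt_db.items():
        rm = release_from_masking(BASELINE_SRT_DB, srt)
        print(f"{condition}: RM = {rm:.1f} dB")
```

With these placeholder numbers, combined cues yield the largest RM (14 dB), mirroring the qualitative pattern the abstract describes for cue combination.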
Identifiers
pubmed: 32692939
doi: 10.1044/2020_JSLHR-19-00421
pmc: PMC7872724
Publication types
Journal Article
Research Support, N.I.H., Extramural
Languages
eng
Citation subset
IM
Pagination
2801-2810
Grants
Agency: NIDCD NIH HHS
ID: R01 DC016883
Country: United States