Perception of incongruent audiovisual English consonants.


Journal

PLoS One
ISSN: 1932-6203
Abbreviated title: PLoS One
Country: United States
NLM ID: 101285081

Publication information

Publication date:
2019
History:
received: 18 Jun 2018
accepted: 25 Feb 2019
entrez: 22 Mar 2019
pubmed: 22 Mar 2019
medline: 4 Dec 2019
Status: epublish

Abstract

Causal inference-the process of deciding whether two incoming signals come from the same source-is an important step in audiovisual (AV) speech perception. This research explored causal inference and perception of incongruent AV English consonants. Nine adults were presented with auditory, visual, congruent AV, and incongruent AV consonant-vowel syllables. Incongruent AV stimuli included auditory and visual syllables with matched vowels, but mismatched consonants. Open-set responses were collected. For most incongruent syllables, participants were aware of the mismatch between auditory and visual signals (59.04%) or reported the auditory syllable (33.73%). Otherwise, participants reported the visual syllable (1.13%) or some other syllable (6.11%). Statistical analyses were used to assess whether visual distinctiveness and place, voice, and manner features predicted responses. Mismatch responses occurred more when the auditory and visual consonants were visually distinct, when place and manner differed across auditory and visual consonants, and for consonants with high visual accuracy. Auditory responses occurred more when the auditory and visual consonants were visually similar, when place and manner were the same across auditory and visual stimuli, and with consonants produced further back in the mouth. Visual responses occurred more when voicing and manner were the same across auditory and visual stimuli, and for front and middle consonants. Other responses were variable, but typically matched the visual place, auditory voice, and auditory manner of the input. Overall, results indicate that causal inference and incongruent AV consonant perception depend on the salience and reliability of auditory and visual inputs and the degree of redundancy between them. A parameter-free computational model of incongruent AV speech perception based on unimodal confusions, with a causal inference rule, was applied. Data from the current study present an opportunity to test and improve the generalizability of current AV speech integration models.
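The abstract describes, but does not specify, a parameter-free model that combines unimodal confusion distributions with a causal inference rule. Purely as an illustrative aid, a minimal sketch of a rule of this general kind might look as follows; the confusion values, the agreement score, and the 0.5 threshold are all hypothetical assumptions for demonstration, not details taken from the paper:

```python
import numpy as np

# Hypothetical unimodal confusion distributions over 3 candidate consonant
# responses (assumed values for illustration only).
p_aud = np.array([0.8, 0.15, 0.05])   # P(response | auditory token)
p_vis = np.array([0.1, 0.2, 0.7])     # P(response | visual token)

def fuse(p_a, p_v):
    """Assume one common source (C=1): multiply unimodal likelihoods
    and renormalize to get a fused response distribution."""
    joint = p_a * p_v
    return joint / joint.sum()

def causal_inference(p_a, p_v, threshold=0.5):
    """Toy causal inference rule: fuse only if the unimodal evidence agrees.

    Agreement is scored as the probability that both modalities would yield
    the same response; below the threshold the signals are treated as coming
    from separate sources (a 'mismatch' percept) and the auditory estimate
    is reported on its own.
    """
    p_same = float(np.dot(p_a, p_v))   # P(both modalities give same response)
    if p_same >= threshold:
        return "fused", fuse(p_a, p_v)
    return "mismatch", p_a

decision, dist = causal_inference(p_aud, p_vis)
# Highly discrepant unimodal evidence, as above, yields a "mismatch" decision.
```

This mirrors only the general shape of confusion-based causal inference models; the published model's actual decision rule and fitting procedure are in the article itself.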

Identifiers

pubmed: 30897109
doi: 10.1371/journal.pone.0213588
pii: PONE-D-18-18187
pmc: PMC6428273

Publication types

Journal Article
Research Support, N.I.H., Extramural

Languages

eng

Citation subsets

IM

Pagination

e0213588

Grants

Agency: NIDCD NIH HHS
ID: F32 DC015387
Country: United States
Agency: NIDCD NIH HHS
ID: P30 DC004661
Country: United States
Agency: NIDCD NIH HHS
ID: R01 DC000396
Country: United States
Agency: NIDCD NIH HHS
ID: T32 DC005361
Country: United States

Conflict of interest statement

The authors have declared that no competing interests exist.


Authors

Kaylah Lalonde (K)

Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington, United States of America.

Lynne A Werner (LA)

Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington, United States of America.
