Visual Speech Recognition: Improving Speech Perception in Noise through Artificial Intelligence.
artificial intelligence
computer vision
hearing loss
lip reading
speech perception
speech-in-noise
visual speech recognition
Journal
Otolaryngology--head and neck surgery : official journal of American Academy of Otolaryngology-Head and Neck Surgery
ISSN: 1097-6817
Abbreviated title: Otolaryngol Head Neck Surg
Country: England
NLM ID: 8508176
Publication information
Publication date: October 2020
History:
pubmed: 27 May 2020
medline: 15 Dec 2020
entrez: 27 May 2020
Status: ppublish
Abstract
Objective: To compare speech perception (SP) in noise between normal-hearing (NH) individuals and individuals with hearing loss (IWHL), and to demonstrate improvements in SP with use of a visual speech recognition program (VSRP).

Study design: Single-institution prospective study. Setting: Tertiary referral center.

Methods: Eleven NH and 9 IWHL participants sat in a sound-isolated booth facing a speaker through a window. In non-VSRP conditions, SP was evaluated on 40 Bamford-Kowal-Bench speech-in-noise test (BKB-SIN) sentences presented by the speaker at 50 A-weighted decibels (dBA), with multiperson babble noise presented at 50 to 75 dBA. SP was defined as the percentage of words correctly identified. In VSRP conditions, an infrared camera tracked 35 points around the speaker's lips in real time during speech. Lip-movement data were translated into text by an in-house neural network-based VSRP. SP was then evaluated as in the non-VSRP condition, on 42 BKB-SIN sentences, with the VSRP output additionally presented on a screen to the listener.

Results: In high-noise conditions (70-75 dBA) without the VSRP, NH listeners achieved significantly higher speech perception than IWHL listeners (38.7% vs 25.0%).

Conclusion: The VSRP significantly increased speech perception in high-noise conditions for both NH and IWHL participants and eliminated the difference in SP accuracy between NH and IWHL listeners.
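The abstract defines SP as the percentage of words correctly identified. A minimal sketch of such a scoring function is below; this is an illustrative simplification, not the study's actual BKB-SIN scoring procedure, which scores designated key words per sentence:

```python
def speech_perception_score(reference: str, response: str) -> float:
    """Percentage of reference words the listener correctly identified.

    Simplified sketch of the SP metric described in the abstract:
    every reference word counts, whereas BKB-SIN scores key words only.
    """
    ref_words = reference.lower().split()
    resp_words = set(response.lower().split())
    correct = sum(1 for word in ref_words if word in resp_words)
    return 100.0 * correct / len(ref_words)

# Listener missed one of four words: 3/4 correct -> 75.0
print(speech_perception_score("the boy ran home", "the boy walked home"))
```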
Identifiers
pubmed: 32453650
doi: 10.1177/0194599820924331
Publication types
Journal Article
Research Support, Non-U.S. Gov't
Languages
eng
Citation subsets
IM