Multi-modal analysis of infant cry types characterization: Acoustics, body language and brain signals.
Keywords
Body language
Cry acoustics
EEG
NIRS
Newborns
Journal
Computers in Biology and Medicine
ISSN: 1879-0534
Abbreviated title: Comput Biol Med
Country: United States
NLM ID: 1250250
Publication information
Publication date: Dec 2023
History:
received: 04 Jul 2023
revised: 14 Sep 2023
accepted: 23 Oct 2023
medline: 27 Nov 2023
pubmed: 3 Nov 2023
entrez: 2 Nov 2023
Status:
ppublish
Abstract
BACKGROUND
Infant crying is the first means babies use to communicate during their initial months of life. Misunderstanding the cry message can compromise infant care and the future neurodevelopmental process.
METHODS
An exploratory study collecting multimodal data (i.e., crying, electroencephalography (EEG), near-infrared spectroscopy (NIRS), facial expressions, and body movements) from 38 healthy full-term newborns was conducted. Cry types were defined based on different conditions (i.e., hunger, sleepiness, fussiness, need to burp, and distress). Statistical analysis, Machine Learning (ML), and Deep Learning (DL) techniques were used to identify relevant features for cry type classification and to evaluate a robust DL algorithm named Acoustic MultiStage Interpreter (AMSI).
RESULTS
Significant differences were found across cry types based on acoustics, EEG, NIRS, facial expressions, and body movements. Acoustics and body language were identified as the most relevant ML features to support the cause of crying. The DL AMSI algorithm achieved an accuracy rate of 92%.
CONCLUSIONS
This study set a precedent for cry analysis research by highlighting the complexity of newborn cry expression and strengthening the potential use of infant cry analysis as an objective, reliable, accessible, and non-invasive tool for cry interpretation, improving the infant-parent relationship and ensuring family well-being.
Identifiers
pubmed: 37918262
pii: S0010-4825(23)01091-0
doi: 10.1016/j.compbiomed.2023.107626
Publication types
Journal Article
Languages
eng
Citation subsets
IM
Pagination
107626
Copyright information
Copyright © 2023 The Authors. Published by Elsevier Ltd. All rights reserved.
Conflict of interest statement
Declaration of competing interest The authors declare competing interests (Funding, Employment or Confidentiality interests) in relation to the work described herein. Ana Laguna, Sandra Pusil, Àngel Bazán and Paolo Piras are employed by Zoundream AG. Ana Laguna is also a co-founder of the company and owns stock in Zoundream AG. Silvia Orlandi, Alexandra Pardos Véglia and Jonathan Adrian Zegarra-Valdivia receive compensation for the collaboration as members of the scientific advisory board of Zoundream AG. Clàudia Palomares’ salary is funded by Zoundream AG through Fundació Clínic. Anna Lucia Paltrinieri and Oscar Garcia-Algar declare no potential conflict of interest.