Can an android's posture and movement discriminate against the ambiguous emotion perceived from its facial expressions?
Journal
PloS one
ISSN: 1932-6203
Abbreviated title: PLoS One
Country: United States
NLM ID: 101285081
Publication information
Publication date: 2021
History:
Received: 2021-01-25
Accepted: 2021-07-06
Entrez: 2021-08-10
PubMed: 2021-08-11
MEDLINE: 2021-11-25
Status: epublish
Abstract
Expressing emotions through various modalities is a crucial function not only for humans but also for robots. Research on robot emotional expression commonly maps facial expressions onto basic emotions, on the assumption that each emotion has a specific pattern of facial muscle activation that people can read in order to perceive that emotion. However, recent research on human behavior shows that some emotional expressions, such as "intense", are difficult to judge as positive or negative from the face alone. It has not yet been investigated whether robots can likewise produce ambiguous facial expressions with no clear valence, or whether adding body expressions makes the facial valence clearer to human observers. This paper shows that viewers perceive an android's ambiguous facial expression more clearly when body postures and movements are added. We conducted three online-survey experiments with North American residents (94, 114, and 114 participants, respectively). In Experiment 1, by calculating the entropy of valence judgments, we found that the facial expression "intense" was difficult to judge as positive or negative when participants saw the face alone. In Experiments 2 and 3, analyses of variance (ANOVAs) confirmed that participants judged the facial valence better when they were shown the android's whole body, even though the facial expression was the same as in Experiment 1. These results suggest that a robot's facial and body expressions should be designed jointly to achieve better communication with humans. For smoother cooperative human-robot interaction, such as robot-led education, emotional expressions that combine the robot's face and body are necessary to convey its intentions or desires to humans.
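As a rough illustration of the entropy measure mentioned above (the authors' exact formulation is not reproduced here, so treat this as an assumed reading): if p is the proportion of viewers who judge an expression as positive, the uncertainty of that binary judgment can be quantified by the Shannon entropy $H(p) = -p \log_2 p - (1 - p) \log_2 (1 - p)$, which reaches its maximum of 1 bit at p = 0.5, i.e., when viewers are evenly split between positive and negative, and falls to 0 when all viewers agree.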
Identifiers
pubmed: 34375327
doi: 10.1371/journal.pone.0254905
pii: PONE-D-21-02725
pmc: PMC8354482
Publication types
Journal Article
Research Support, Non-U.S. Gov't
Languages
eng
Citation subsets
IM
Pagination
e0254905
Conflict of interest statement
The authors have declared that no competing interests exist.