Emotion Recognizing by a Robotic Solution Initiative (EMOTIVE Project).
Keywords: acceptability; human-robot interaction; monitoring of behavior and internal states of humans; non-verbal cues and expressiveness
Journal
Sensors (Basel, Switzerland)
ISSN: 1424-8220
Abbreviated title: Sensors (Basel)
Country: Switzerland
NLM ID: 101204366
Publication information
Publication date: 08 Apr 2022
History:
received: 26 Jan 2022
revised: 03 Apr 2022
accepted: 07 Apr 2022
entrez: 23 Apr 2022
pubmed: 24 Apr 2022
medline: 27 Apr 2022
Status: epublish
Abstract
Emotion recognition skills are predicted to be fundamental features of social robots. Since facial detection and recognition algorithms are compute-intensive operations, methods are needed that can parallelize the algorithmic operations for large-scale information exchange in real time. The study aims were to determine whether traditional machine learning algorithms could be used to assess each user's emotions separately, to compare emotion recognition across two robotic modalities (static vs. moving robot), and to evaluate the acceptability and usability of an assistive robot from an end-user point of view. Twenty-seven hospital employees (M = 12; F = 15) were recruited for the experiment, which showed 60 positive, negative, or neutral images selected from the International Affective Picture System (IAPS) database. The experiment was performed with the Pepper robot. In the experimental phase with Pepper in active mode, concordant mimicry was programmed based on the image type (positive, negative, or neutral). During the experiment, the images were shown on a tablet on the robot's chest via a web interface, with each slide lasting 7 s. For each image, participants were asked to perform a subjective assessment of the perceived emotional experience using the Self-Assessment Manikin (SAM). After participants used the robotic solution, the Almere model questionnaire (AMQ) and the system usability scale (SUS) were administered to assess the acceptability, usability, and functionality of the robotic solution. Analysis was performed on video recordings. The evaluation of the three types of attitude (positive, negative, and neutral) was performed with two machine learning classification algorithms: k-nearest neighbors (KNN) and random forest (RF). According to the analysis of emotions performed on the recorded videos, the RF algorithm performed better than KNN in terms of accuracy (mean ± sd = 0.98 ± 0.01) and execution time (mean ± sd = 5.73 ± 0.86 s).
With the RF algorithm, neutral, positive, and negative attitudes all had equally high precision (mean = 0.98) and F-measure (mean = 0.98). Most participants reported a high level of usability and acceptability of the robotic solution. The RF algorithm performed better than KNN in terms of accuracy and execution time. The robot was not a disturbing factor in the arousal of emotions.
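The abstract reports per-class precision and F-measure for the three attitude classes. As an illustration of how such metrics are computed from classifier output (not the study's actual pipeline; the labels and predictions below are hypothetical toy data), a minimal sketch in pure Python:

```python
def per_class_metrics(y_true, y_pred, labels):
    """Per-class precision, recall, and F-measure from label lists."""
    metrics = {}
    for c in labels:
        # Count true positives, false positives, and false negatives for class c
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        # F-measure (F1) is the harmonic mean of precision and recall
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        metrics[c] = {"precision": precision, "recall": recall, "f1": f1}
    return metrics

# Hypothetical per-video attitude labels (ground truth vs. classifier output):
true = ["positive", "negative", "neutral", "positive", "negative", "neutral"]
pred = ["positive", "negative", "neutral", "positive", "neutral", "neutral"]
m = per_class_metrics(true, pred, ["positive", "negative", "neutral"])
```

A study like this one would compute these metrics once per classifier (KNN and RF) and average over classes to obtain the reported means.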
Identifiers
pubmed: 35458845
pii: s22082861
doi: 10.3390/s22082861
pmc: PMC9031388
Publication types
Journal Article
Languages
eng
Citation subsets
IM