A Reinforcement Learning Framework to Foster Affective Empathy in Social Robots / Sorrentino, Alessandra; Assunção, Gustavo; Cavallo, Filippo; Fiorini, Laura; Menezes, Paulo. - ELECTRONIC. - 13817 LNAI:(2023), pp. 522-533. (Paper presented at the 14th International Conference on Social Robotics (ICSR 2022)) [10.1007/978-3-031-24667-8_46].
A Reinforcement Learning Framework to Foster Affective Empathy in Social Robots
Sorrentino, Alessandra; Assunção, Gustavo; Cavallo, Filippo; Fiorini, Laura; Menezes, Paulo
2023
Abstract
This work aims to endow a social robot with the ability to mimic affective empathy, i.e., the skill of inferring another person's emotional state and mirroring the detected emotion. To this end, we first designed a set of 52 facial expressions intended to be representative of the primary emotions. Building on the idea that the robot should learn the mapping between emotions and expressions directly from end users, we modeled a deep reinforcement learning algorithm and recruited 105 users to train it by rating the coherence of the robot's facial expressions on a web interface. A total of 22251 facial configurations were generated by the algorithm and rewarded by the pool of participants. The results show that the algorithm explored every facial configuration, converging towards a subset of 6 facial expressions by the end of the teaching process. We then tested the trained empathetic model on a real robot (the CloudIA robot) in a conversation scenario. The results collected through the interviews and questionnaire analysis highlighted a general tendency to prefer the robot's empathetic behavior over the non-empathetic one: the robot endowed with empathetic behavior was perceived as more human-like and more aware of the context of the interaction.
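The human-in-the-loop training described in the abstract can be illustrated by a minimal sketch. The paper's actual deep RL architecture is not reproduced here; this hypothetical example uses a simple epsilon-greedy bandit in which user coherence ratings play the role of the reward signal, and a simulated rater stands in for the 105 participants:

```python
import random

# Hypothetical sketch (not the paper's implementation): an epsilon-greedy
# bandit over candidate facial configurations, where a rating function
# supplies the reward that users provided via the web interface.

N_EXPRESSIONS = 52   # candidate facial configurations (from the abstract)
EPSILON = 0.1        # exploration rate (assumed value)

def train(rate_fn, steps=5000, seed=0):
    """Learn preference values for expressions from a rating function.

    rate_fn(expression_id) -> reward in [0, 1]; in the study this role
    was played by users rating expression coherence.
    """
    rng = random.Random(seed)
    values = [0.0] * N_EXPRESSIONS
    counts = [0] * N_EXPRESSIONS
    for _ in range(steps):
        if rng.random() < EPSILON:
            a = rng.randrange(N_EXPRESSIONS)                       # explore
        else:
            a = max(range(N_EXPRESSIONS), key=values.__getitem__)  # exploit
        r = rate_fn(a)
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]                   # incremental mean
    return values

# Simulated rater: only a small subset of expressions is judged coherent,
# mirroring the reported convergence to 6 preferred expressions.
ratings = train(lambda a: 1.0 if a < 6 else 0.0)
best = max(range(N_EXPRESSIONS), key=ratings.__getitem__)
```

With this stand-in rater, the learned values concentrate on the small coherent subset, which is the qualitative behavior the abstract reports for the real user-trained model.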
File | Size | Format
---|---|---
ICSR2022_Sorrentino.pdf | 1.47 MB | Adobe PDF

Closed access. Type: publisher's PDF (version of record). License: all rights reserved.
Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.