
Design and Implementation of a Storytelling Robot: Preliminary Evaluation of a GAN-Based Model for Co-Speech Gesture Generation / Pugi, Lorenzo; Sorrentino, Alessandra; Fiorini, Laura; Cavallo, Filippo. - ELECTRONIC. - (2024), pp. 373-385. (Paper presented at the Italian Forum of Ambient Assisted Living conference) [10.1007/978-3-031-77318-1_25].

Design and Implementation of a Storytelling Robot: Preliminary Evaluation of a GAN-Based Model for Co-Speech Gesture Generation

Pugi, Lorenzo; Sorrentino, Alessandra; Fiorini, Laura; Cavallo, Filippo
2024

Abstract

People use non-verbal signals, such as facial expressions, gestures, and body movements, to express their emotional intent or to convey an oral message clearly. Owing to its importance, the social robotics community has become increasingly interested in replicating this capability in robotic platforms as well. This work aims to endow a humanoid robot with appropriate co-speech gestures during an interactive storytelling activity with the end-user, in assistive and more general settings. In this context, we implemented a GAN-based model that generates co-speech gestures given text and audio features as input. The co-speech gestures were then performed by a Pepper robot and evaluated in an ad-hoc experimental setup: a cohort of 18 participants directly interacted with the robot Pepper, addressing three topics of conversation. The robot behaviours were rated in terms of similarity, appropriateness, naturalness, amount of gesticulation, and expressiveness. The results highlighted that the gestures generated by the proposed model were human-like and appropriate to the content of the speech. The analysis of user perception showed that the evaluations did not differ statistically across the topics of conversation. Additionally, the users' perception of the robot's capabilities was not affected by their personality and did not vary across multiple interactions. These findings suggest that the human-driven robot behaviours generated by the proposed model could foster interaction with end-users and improve social relations, independently of the interaction context.
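To illustrate the kind of architecture the abstract describes (a GAN-based model mapping text and audio features to co-speech gestures), the minimal Python sketch below pairs a recurrent generator that turns per-frame text, audio, and noise features into a joint-angle sequence with a discriminator that scores gesture/speech pairs. All names, feature dimensions, and layer choices (Generator, Discriminator, TEXT_DIM, JOINT_DIM, the GRU layers, and so on) are illustrative assumptions, not the architecture reported in the paper.

# Hedged sketch of a conditional GAN for co-speech gesture generation.
# Dimensions and layer choices are assumptions made for illustration only.
import torch
import torch.nn as nn

TEXT_DIM, AUDIO_DIM, NOISE_DIM, JOINT_DIM = 300, 128, 16, 10  # assumed sizes


class Generator(nn.Module):
    """Maps (text, audio, noise) feature sequences to joint-angle sequences."""

    def __init__(self, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(TEXT_DIM + AUDIO_DIM + NOISE_DIM, hidden,
                          num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, JOINT_DIM)

    def forward(self, text_feat, audio_feat, noise):
        x = torch.cat([text_feat, audio_feat, noise], dim=-1)  # (B, T, D)
        h, _ = self.rnn(x)
        return torch.tanh(self.out(h))  # normalized joint angles per frame


class Discriminator(nn.Module):
    """Scores how plausible a gesture sequence is given the speech features."""

    def __init__(self, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(TEXT_DIM + AUDIO_DIM + JOINT_DIM, hidden,
                          batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, text_feat, audio_feat, gestures):
        x = torch.cat([text_feat, audio_feat, gestures], dim=-1)
        h, _ = self.rnn(x)
        return self.out(h[:, -1])  # one realism score per sequence


if __name__ == "__main__":
    B, T = 4, 50  # batch of 4 utterances, 50 frames each (assumed)
    text = torch.randn(B, T, TEXT_DIM)
    audio = torch.randn(B, T, AUDIO_DIM)
    z = torch.randn(B, T, NOISE_DIM)
    gestures = Generator()(text, audio, z)          # (4, 50, 10) joint angles
    score = Discriminator()(text, audio, gestures)  # (4, 1) realism scores
    print(gestures.shape, score.shape)

In a full pipeline of this kind, the generated joint-angle sequence would then be retargeted to the robot's joint limits and played back in synchrony with the synthesized speech.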
Ambient Assisted Living. ForItAAL 2024.
Italian Forum of Ambient Assisted Living
Files in this record:
File: ForItAAL2024_Pugi.pdf
Access: Closed access (request a copy)
Type: Publisher's PDF (Version of record)
License: All rights reserved
Size: 1.09 MB
Format: Adobe PDF

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this resource: https://hdl.handle.net/2158/1413992