Fiorini L.; Loizzo F.G.C.; Sorrentino A.; Kim J.; Rovini E.; Di Nuovo A.; Cavallo F. Daily Gesture Recognition during Human-Robot Interaction Combining Vision and Wearable Systems. IEEE Sensors Journal, vol. 21, 2021, pp. 23568-23577. ISSN 1530-437X. DOI: 10.1109/JSEN.2021.3108011

Daily Gesture Recognition during Human-Robot Interaction Combining Vision and Wearable Systems

Fiorini L.; Kim J.; Rovini E.; Cavallo F.
2021

Abstract

The recognition of human gestures is crucial for improving the quality of human-robot cooperation. This article presents a system composed of a Pepper robot equipped with an RGB-D camera and an inertial device called SensHand. The system acquired data from twenty people who performed five daily living activities (i.e., Having Lunch, Personal Hygiene, Working, House Cleaning, Relax). Each activity was composed of at least two 'basic' gestures, for a total of 10 gestures. Data acquisition was performed with two cameras positioned laterally and frontally to mimic real-world conditions. The acquired data were classified offline considering different combinations of sensors to evaluate how a sensor fusion approach improves recognition ability. Specifically, the article presents an experimental study that evaluated four algorithms often used in computer vision, i.e., three classical machine learning algorithms and one deep learning algorithm, namely Support Vector Machine, Random Forest, K-Nearest Neighbor and Long Short-Term Memory Recurrent Neural Network. The comparative analysis of the results shows a significant improvement in accuracy when camera and sensor data are fused: 0.81 for the whole system configuration when the robot is positioned frontally with respect to the user (0.79 if only the index finger sensors are considered) and 0.75 when the robot is in a lateral position. Interestingly, the system performs well in recognising the transitions between gestures when these are presented one after the other, a common event in real life that was often neglected in previous studies.
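To illustrate the kind of comparison the abstract describes, the following is a minimal sketch (not the authors' code) of evaluating the three classical classifiers on a fused camera-plus-inertial feature matrix with scikit-learn. The synthetic arrays, feature counts, and window count are hypothetical placeholders standing in for the real extracted features; the LSTM, which operates on raw time series rather than per-window feature vectors, is omitted here.

```python
# Hedged sketch: comparing SVM, Random Forest and KNN on hypothetical
# fused features (camera + inertial), as in the study's classical branch.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Placeholder fused features: per-window skeleton features from the RGB-D
# camera concatenated with inertial features from the SensHand device.
# All sizes below are illustrative assumptions, not values from the paper.
n_windows, n_camera_feats, n_imu_feats = 500, 30, 18
X = np.hstack([
    rng.normal(size=(n_windows, n_camera_feats)),  # camera features
    rng.normal(size=(n_windows, n_imu_feats)),     # inertial features
])
y = rng.integers(0, 10, size=n_windows)  # labels for the 10 basic gestures

classifiers = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "Random Forest": RandomForestClassifier(n_estimators=100),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
}

# 5-fold cross-validated accuracy per classifier, mirroring the offline
# per-sensor-combination evaluation described in the abstract.
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.2f}")
```

Repeating this evaluation with camera-only, inertial-only, and fused feature subsets is one straightforward way to quantify the gain from sensor fusion that the abstract reports.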
Year: 2021
Volume: 21
Pages: 23568-23577
Sustainable Development Goal: Goal 3: Good health and well-being for people
Authors: Fiorini L.; Loizzo F.G.C.; Sorrentino A.; Kim J.; Rovini E.; Di Nuovo A.; Cavallo F.
Files in this item:

File: 2021 - Daily_Gesture_Recognition_During_Human-Robot_Interaction_Combining_Vision_and_Wearable_Systems.pdf
Access: Open access
Type: Final refereed version (Postprint, Accepted manuscript)
License: Open Access
Size: 2.81 MB
Format: Adobe PDF
Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/2158/1255022
Citations
  • PubMed Central: ND
  • Scopus: 19
  • Web of Science (ISI): 11