
Combined Shape Analysis of Human Poses and Motion Units for Action Segmentation and Recognition / Devanne, M.; Wannous, H.; Pala, P.; Berretti, S.; Daoudi, M.; Del Bimbo, A. - PRINT. - (2015), pp. 1-6. (Paper presented at the 1st International Workshop on Understanding Human Activities through 3D Sensors (UHA3DS'15), held in Ljubljana, Slovenia, May 4-8, 2015) [10.1109/FG.2015.7284880].

Combined Shape Analysis of Human Poses and Motion Units for Action Segmentation and Recognition

Devanne, Maxime; Pala, Pietro; Berretti, Stefano; Del Bimbo, Alberto
2015

Abstract

Recognizing human actions or analyzing human behaviors from 3D videos is an important problem currently investigated in many research domains. The high complexity of human motions and the variability of gesture combinations make this task challenging. Local (over time) analysis of a sequence is often necessary to gain a more accurate and thorough understanding of what the human is doing. In this paper, we propose a method that combines pose-based and segment-based approaches to segment an action sequence into motion units (MUs). We jointly analyze the shape of the human pose and the shape of its motion using a shape analysis framework that represents and compares shapes on a Riemannian manifold. On the one hand, this allows us to detect periodic MUs and thus perform action segmentation. On the other hand, we can remove repetitions of gestures to handle failure cases in action recognition. Experiments are performed on three representative datasets for the tasks of action segmentation and action recognition. Results competitive with state-of-the-art methods are obtained in both tasks.
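
To illustrate the kind of Riemannian shape comparison the abstract refers to, the sketch below computes a geodesic distance between two skeleton trajectories using a square-root velocity (SRVF) style representation on the pre-shape sphere, one common choice for shape analysis of open curves. This is a minimal sketch under the assumption that both trajectories are resampled to the same length; the function names, the SRVF choice, and the toy data are illustrative assumptions and do not reproduce the paper's actual formulation.

```python
import numpy as np

def srvf(traj, eps=1e-8):
    """Square-root velocity representation of a trajectory.

    traj: (T, d) array, e.g. T frames of a flattened skeleton pose.
    Returns a (T-1, d) array q with q_t = v_t / sqrt(||v_t||).
    """
    v = np.diff(traj, axis=0)                          # frame-to-frame velocity
    norms = np.sqrt(np.linalg.norm(v, axis=1)) + eps   # avoid division by zero
    return v / norms[:, None]

def shape_distance(traj_a, traj_b):
    """Geodesic distance between two trajectories on the pre-shape sphere.

    After projecting the SRVFs onto the unit sphere (L2 norm = 1),
    the geodesic distance is the arc length arccos(<q_a, q_b>).
    """
    qa, qb = srvf(traj_a), srvf(traj_b)
    qa /= np.linalg.norm(qa)
    qb /= np.linalg.norm(qb)
    inner = np.clip(np.sum(qa * qb), -1.0, 1.0)
    return np.arccos(inner)

# Toy usage: two random 3-joint skeleton sequences (30 frames x 9 coordinates).
rng = np.random.default_rng(0)
seq1 = rng.normal(size=(30, 9)).cumsum(axis=0)
seq2 = rng.normal(size=(30, 9)).cumsum(axis=0)
print(shape_distance(seq1, seq2))
```

Such a distance could, in principle, be computed both between whole motion trajectories and between static poses, which is the spirit of the joint pose/motion analysis described above.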
2015
IEEE International Conference on Automatic Face and Gesture Recognition Workshops
1st International Workshop on Understanding Human Activities through 3D Sensors (UHA3DS'15)
Ljubljana, Slovenia
May 4-8, 2015
Devanne, M.; Wannous, H.; Pala, P.; Berretti, S.; Daoudi, M.; Del Bimbo, A.
Files in this item:
File: fgwks15_maxime.pdf
Access: closed
Description: postprint file
Type: refereed final version (Postprint, Accepted manuscript)
License: all rights reserved
Size: 834.03 kB
Format: Adobe PDF

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this resource: https://hdl.handle.net/2158/1008041
Citations
  • PubMed Central: n/a
  • Scopus: 17
  • Web of Science: 3