Analyzing Trajectories on Grassmann Manifold for Early Emotion Detection from Depth Videos / Alashkar, T.; Ben Amor, B.; Daoudi, M.; Berretti, S. - PRINT. - (2015), pp. 1-6. (Paper presented at the IEEE International Conference on Automatic Face and Gesture Recognition, held in Ljubljana, 4-8 May 2015) [10.1109/FG.2015.7163122].

Analyzing Trajectories on Grassmann Manifold for Early Emotion Detection from Depth Videos

BERRETTI, STEFANO
2015

Abstract

This paper proposes a new framework for online detection of spontaneous emotions from low-resolution depth sequences of the upper part of the body. To face the challenges of this scenario, depth videos are decomposed into subsequences, each modeled as a linear subspace, which in turn is represented as a point on a Grassmann manifold. Modeling the temporal evolution of distances between subsequences on the underlying manifold as a one-dimensional signature, termed Geometric Motion History, allows us to embed this temporal signature into an early detection framework based on Structured Output SVM, thus enabling online emotion detection. Results obtained on the publicly available Cam3D Kinect database validate the proposed solution, and also demonstrate that using the upper body, instead of the face alone, can improve emotion detection performance.
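As a rough illustration of the pipeline described in the abstract, the Python sketch below shows one way a window of vectorized depth frames can be summarized as a linear subspace (a point on a Grassmann manifold) and how a one-dimensional distance signature over time can be derived from successive windows. The window length, step, subspace dimension, and the choice of the first window as the reference subspace are illustrative assumptions, not values or design choices taken from the paper.

    import numpy as np

    def window_subspace(frames, k):
        # frames: (n_frames, n_pixels) array of vectorized depth frames.
        # Returns an orthonormal basis (n_pixels x k), i.e. a point on the
        # Grassmann manifold of k-dimensional subspaces of R^(n_pixels).
        U, _, _ = np.linalg.svd(frames.T, full_matrices=False)
        return U[:, :k]

    def grassmann_geodesic(X, Y):
        # Geodesic distance between two subspaces via their principal angles.
        cosines = np.linalg.svd(X.T @ Y, compute_uv=False)
        angles = np.arccos(np.clip(cosines, 0.0, 1.0))
        return np.linalg.norm(angles)

    def geometric_motion_history(depth_video, window=15, step=5, k=10):
        # depth_video: (n_frames, height, width) depth sequence.
        # Returns a 1-D signature: distance of each sliding-window subspace
        # from the subspace of the first window (one plausible pairing;
        # the paper's exact construction may differ).
        n = depth_video.shape[0]
        flat = depth_video.reshape(n, -1).astype(float)
        bases = [window_subspace(flat[t:t + window], k)
                 for t in range(0, n - window + 1, step)]
        ref = bases[0]
        return np.array([grassmann_geodesic(ref, B) for B in bases])

In an online setting, each new value appended to this signature would feed an early event detector such as the Structured Output SVM mentioned in the abstract; that stage is not shown here.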
IEEE International Conference on Automatic Face and Gesture Recognition
Ljubljana
4-8 May 2015
Alashkar, T.; Ben Amor, B.; Daoudi, M.; Berretti, S.
Files in this record:

fg15.pdf (closed access)
Description: postprint file
Type: Final refereed version (Postprint, Accepted manuscript)
License: All rights reserved
Size: 978.59 kB
Format: Adobe PDF

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this resource: https://hdl.handle.net/2158/1008055
Citations
  • Scopus: 0
  • Web of Science: 0