
Luigi Ariano, Claudio Ferrari, Stefano Berretti, Alberto Del Bimbo. "Action Unit Detection by Learning the Deformation Coefficients of a 3D Morphable Model." Sensors, vol. 21, 2021, pp. 1-19. ISSN 1424-8220. DOI: 10.3390/s21020589

Action Unit Detection by Learning the Deformation Coefficients of a 3D Morphable Model

Luigi Ariano, Claudio Ferrari, Stefano Berretti, Alberto Del Bimbo
2021

Abstract

Facial Action Units (AUs) correspond to the deformation or contraction of individual facial muscles or their combinations. As such, each AU affects just a small portion of the face, often with asymmetric deformations. Generating and analyzing AUs in 3D is particularly relevant for the applications it can enable. In this paper, we propose a solution for 3D AU detection and synthesis built on a newly defined 3D Morphable Model (3DMM) of the face. Unlike most 3DMMs in the literature, which mainly model global variations of the face and struggle to adapt to local and asymmetric deformations, the proposed solution is specifically devised to cope with such difficult morphings. During a training phase, we learn the deformation coefficients that let the 3DMM fit 3D target scans showing the neutral face and a facial expression of the same individual, thus decoupling expression deformations from identity deformations. These deformation coefficients are then used, on the one hand, to train an AU classifier; on the other, they can be applied to a 3D neutral scan to generate AU deformations in a subject-independent manner. The proposed approach for AU detection is validated on the Bosphorus dataset, reporting competitive results with respect to the state of the art, even in a challenging cross-dataset setting. We further show that the learned coefficients are general enough to synthesize realistic 3D face instances with AU activations.
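The core idea described above, applying learned deformation coefficients to a neutral 3D scan to synthesize an AU activation, can be sketched as a linear 3DMM-style deformation. This is a minimal illustration, not the authors' implementation; the function name, array shapes, and toy data are all hypothetical:

```python
import numpy as np

def apply_au_deformation(neutral_scan, components, coefficients):
    """Deform a neutral face by a linear combination of deformation components.

    neutral_scan : (N, 3) array of 3D vertex positions of the neutral scan
    components   : (K, N, 3) array of learned per-vertex deformation components
    coefficients : (K,) array of deformation coefficients (e.g., learned for
                   a specific Action Unit)

    Returns the deformed (N, 3) vertex positions.
    """
    # Linear deformation: add the coefficient-weighted components
    # to the neutral geometry (contracts over the K axis).
    return neutral_scan + np.tensordot(coefficients, components, axes=1)

# Toy example: a 4-vertex "scan" and 2 deformation components.
neutral = np.zeros((4, 3))
comps = np.random.randn(2, 4, 3)
w = np.array([0.5, -0.2])
deformed = apply_au_deformation(neutral, comps, w)
```

Because the deformation is expressed in the model's component space rather than tied to a specific identity, the same coefficient vector `w` can be applied to any subject's neutral scan, which is what makes the synthesis subject-independent.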
Files in this record:
File: sensors2021.pdf (Adobe PDF, 7.37 MB). Description: final document. Type: refereed final version (Postprint, Accepted manuscript). License: DRM not defined. Access: closed.


Use this identifier to cite or link to this resource: https://hdl.handle.net/2158/1221590
Citations: Scopus 7, Web of Science 4