A Dictionary Learning based 3D Morphable Shape Model / Ferrari, Claudio; Lisanti, Giuseppe; Berretti, Stefano; Del Bimbo, Alberto. - In: IEEE TRANSACTIONS ON MULTIMEDIA. - ISSN 1520-9210. - Print. - 19:(2017), pp. 2666-2679. [10.1109/TMM.2017.2707341]

A Dictionary Learning based 3D Morphable Shape Model

Ferrari, Claudio; Lisanti, Giuseppe; Berretti, Stefano; Del Bimbo, Alberto
2017

Abstract

Face analysis from 2D images and videos is a central task in many multimedia applications. Methods developed to this end perform either face recognition or facial expression recognition, and in both cases results are negatively influenced by variations in pose, illumination and resolution of the face. Such variations have a lower impact on 3D face data, which has given rise to the idea of using a 3D Morphable Model as an intermediate tool to enhance face analysis on 2D data. In this paper, we propose a new approach for constructing a 3D Morphable Shape Model (called DL-3DMM) and show that our solution can reach the accuracy of deformation required in applications where fine details of the face are concerned. To construct the model, we start from a set of 3D face scans with large variability in terms of ethnicity and expressions. Across these training scans, we compute a point-to-point dense alignment, which remains accurate even in the presence of topological variations of the face. The DL-3DMM is constructed by learning a dictionary of basis components on the aligned scans. The model is then fitted to 2D target faces using an efficient regularized ridge regression guided by 2D/3D facial landmark correspondences, in order to generate pose-normalized face images. Comparison between the DL-3DMM and the standard PCA-based 3DMM demonstrates that in general a lower reconstruction error can be obtained with our solution. Application to action unit detection and emotion recognition from 2D images and videos shows competitive results with state-of-the-art methods on two benchmark datasets.
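The fitting step described in the abstract — recovering deformation coefficients of the shape model by regularized ridge regression on 2D/3D landmark correspondences — can be sketched on synthetic data as follows. This is a minimal illustration, not the authors' code: it assumes a simple orthographic projection and random toy components, and names such as `mean_shape` and `components` are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

n_vertices = 50     # toy model size (real 3DMMs use thousands of vertices)
n_components = 10   # number of learned deformation components (dictionary atoms)
n_landmarks = 14    # vertices with known 2D correspondences

# Hypothetical average 3D face shape and deformation components.
mean_shape = rng.normal(size=(n_vertices, 3))
components = rng.normal(size=(n_components, n_vertices * 3))

# Landmark vertex indices; projection here is simply orthographic onto x-y.
lm_idx = rng.choice(n_vertices, size=n_landmarks, replace=False)

# Synthesize a "target" face: deform the mean with known coefficients, project.
true_alpha = rng.normal(scale=0.5, size=n_components)
target_3d = mean_shape + (true_alpha @ components).reshape(n_vertices, 3)
target_2d = target_3d[lm_idx, :2]   # observed 2D landmark positions

# Linear system: effect of each component on the projected landmarks.
A = np.stack(
    [c.reshape(n_vertices, 3)[lm_idx, :2].ravel() for c in components], axis=1
)
b = (target_2d - mean_shape[lm_idx, :2]).ravel()

# Closed-form ridge-regression solve for the deformation coefficients.
lam = 1e-3
alpha = np.linalg.solve(A.T @ A + lam * np.eye(n_components), A.T @ b)
```

On this noise-free toy problem the recovered `alpha` closely matches `true_alpha`; the regularizer `lam` keeps the solve stable when landmarks are few or noisy.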
File in this record:
tmm17.pdf — main article, publisher's PDF (Version of record), Adobe PDF, 1.23 MB. Closed access; a copy may be requested.

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this resource: https://hdl.handle.net/2158/1086597
Citations
  • PubMed Central: n/a
  • Scopus: 39
  • Web of Science: 19