
Learning 3DMM Deformation Coefficients for Rendering Realistic Expression Images / Claudio Ferrari, Stefano Berretti, Pietro Pala, Alberto Del Bimbo. - ELECTRONIC. - (2018), pp. 1-14. (Paper presented at the International Conference on Smart Multimedia, held in Toulon, France, August 24-25, 2018).

Learning 3DMM Deformation Coefficients for Rendering Realistic Expression Images

Claudio Ferrari;Stefano Berretti;Pietro Pala;Alberto Del Bimbo
2018

Abstract

Analysis of facial expressions is a task of increasing interest in Computer Vision, with many potential applications. However, collecting images with labeled expressions for many subjects is quite a complicated operation. In this paper, we propose a solution that uses a particular 3D morphable model (3DMM) that, starting from a neutral image of a target subject, is capable of producing a realistic expressive face image of the same subject. This is possible thanks to the fact that the 3DMM we use can effectively and efficiently fit to 2D images, and then deform itself under the action of deformation parameters that are learned expression-by-expression in a subject-independent manner. Ultimately, the application of such deformation parameters to the neutral model of a subject allows the rendering of realistic expressive images of that subject. In the experiments, we demonstrate that such deformation parameters can be learned even from a small set of training data using simple statistical tools; despite this simplicity, we show that very realistic subject-dependent expression renderings can be obtained with our method. Furthermore, robustness in cross-dataset tests is also evidenced.
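The pipeline described above (fit a 3DMM to a neutral image of the subject, then apply subject-independent, expression-specific deformation coefficients and render the result) can be illustrated with a minimal sketch. Everything below is a hypothetical illustration under a standard linear 3DMM assumption, not the authors' implementation: the array shapes, variable names, and the use of a simple mean over training coefficients to "learn" an expression are all placeholders, and the paper's actual 2D fitting and rendering steps are omitted.

```python
# Minimal sketch (assumed linear 3DMM, not the authors' exact model):
# shape = mean + identity basis @ alpha + expression basis @ delta.
import numpy as np

rng = np.random.default_rng(0)

# Toy model sizes: n vertices, k identity components, m expression components.
n_vertices, k_id, m_expr = 5000, 50, 30
mean_shape = rng.standard_normal(3 * n_vertices)                 # average face
id_basis = rng.standard_normal((3 * n_vertices, k_id))           # identity basis
expr_basis = rng.standard_normal((3 * n_vertices, m_expr))       # deformation basis

def reconstruct(alpha, delta):
    """Shape = mean + identity part + expression deformation."""
    return mean_shape + id_basis @ alpha + expr_basis @ delta

# Step 1: identity coefficients alpha would come from fitting the 3DMM
# to a neutral 2D image of the target subject (fitting code omitted).
alpha_subject = 0.1 * rng.standard_normal(k_id)

# Step 2: subject-independent expression coefficients delta are learned
# offline, one set per expression; here, as a simple statistical tool,
# the mean of (toy) training coefficients for one expression.
train_deltas_happy = 0.2 * rng.standard_normal((20, m_expr))     # toy training set
delta_happy = train_deltas_happy.mean(axis=0)

# Step 3: apply the learned deformation to the subject's neutral model;
# the deformed shape would then be textured and rendered as an image.
neutral_shape = reconstruct(alpha_subject, np.zeros(m_expr))
happy_shape = reconstruct(alpha_subject, delta_happy)
print("max vertex displacement:", np.abs(happy_shape - neutral_shape).max())
```

Because the deformation coefficients are learned per expression rather than per subject, the same `delta_happy` can, in principle, be applied to any subject's fitted neutral model.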
2018
Smart Multimedia
International Conference on Smart Multimedia
Toulon, France
August 24-25, 2018
Claudio Ferrari, Stefano Berretti, Pietro Pala, Alberto Del Bimbo
Files for this record:
There are no files associated with this record.

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this resource: https://hdl.handle.net/2158/1138476
Citations
  • Scopus: 4