
3D face reconstruction from RGB-D data by morphable model to point cloud dense fitting / Ferrari C.; Berretti S.; Pala P.; Del Bimbo A. - ELECTRONIC. - (2019), pp. 728-735. (Paper presented at the 8th International Conference on Pattern Recognition Applications and Methods, ICPRAM 2019, held in Prague, Czech Republic, in 2019) [10.5220/0007521007280735].

3D face reconstruction from RGB-D data by morphable model to point cloud dense fitting

Ferrari C.; Berretti S.; Pala P.; Del Bimbo A.
2019

Abstract

3D cameras for face capture are quite common today thanks to their ease of use and affordable cost. The depth information they provide is mainly used to enhance face pose estimation and tracking, and face-background segmentation, while applications that require finer facial details are usually not feasible due to the low-resolution data acquired by such devices. In this paper, we propose a framework for deriving high-quality 3D models of the face from low-resolution depth sequences acquired with a depth camera. To this end, we first define a solution that exploits temporal redundancy in a short sequence of adjacent depth frames to remove most of the acquisition noise and produce an aggregated point cloud with intermediate-level details. Then, using a 3DMM specifically designed to support local and expression-related deformations of the face, we propose a two-step 3DMM fitting solution: initially, the model is deformed under the effect of landmark correspondences; subsequently, it is iteratively refined through point-closeness updates guided by a mean-square optimization. Preliminary results show that the proposed solution derives 3D models of the face with high visual quality; quantitative results also demonstrate the superiority of our approach over methods that use one-step fitting based on landmarks.
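This record does not include the paper's implementation, so the following is only an illustrative sketch of the general two-step scheme the abstract describes: a first alignment driven by sparse correspondences, followed by iterative closest-point refinement minimizing mean-square distances. All function names are hypothetical, and a rigid Procrustes/Kabsch alignment stands in for the paper's 3DMM deformation, which is not detailed here.

```python
# Illustrative sketch only -- NOT the authors' method. A rigid alignment is
# used in place of the paper's learned 3DMM deformation.
import numpy as np

def procrustes_align(src, dst):
    """Least-squares rigid transform (R, t) mapping corresponding rows of
    src onto dst (Kabsch algorithm)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp_refine(src, dst, iters=20):
    """Step 2 sketch: repeatedly match each source point to its closest
    target point, then re-fit the alignment in the least-squares sense."""
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours (fine for a small example)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matches = dst[d2.argmin(1)]
        R, t = procrustes_align(cur, matches)
        cur = cur @ R.T + t
    return cur
```

In the paper's setting the correspondences of the first step come from facial landmarks and the refinement updates the morphable-model coefficients rather than a single rigid pose; the sketch only conveys the coarse-to-fine structure of the fit.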
ICPRAM 2019 - Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods
8th International Conference on Pattern Recognition Applications and Methods, ICPRAM 2019
Prague, Czech Republic
2019
Ferrari C.; Berretti S.; Pala P.; Del Bimbo A.
Files in this item:
ICPRAM2019.pdf

Closed access

Description: main article
Type: Refereed final version (Postprint, Accepted manuscript)
License: DRM not defined
Size: 4.81 MB
Format: Adobe PDF

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/2158/1167271
Citations
  • PMC: ND
  • Scopus: 0
  • ISI: 1