Ferrari C.; Berretti S.; Pala P.; Del Bimbo A. Measuring 3D face deformations from RGB images of expression rehabilitation exercises. Virtual Reality & Intelligent Hardware, vol. 4, 2022, pp. 306-323. ISSN 2666-1209. DOI: 10.1016/j.vrih.2022.05.004

Measuring 3D face deformations from RGB images of expression rehabilitation exercises

Ferrari C.; Berretti S.; Pala P.; Del Bimbo A.
2022

Abstract

Background

The accurate, quantitative analysis of 3D face deformations is a problem of increasing interest, given its many potential applications. In particular, defining a 3D model of the face that can be deformed to fit a 2D target image while capturing local and asymmetric deformations remains a challenge in the existing literature. A measure of such local deformations can serve as a relevant index for monitoring the rehabilitation exercises used in Parkinson's and Alzheimer's diseases or in recovery from a stroke.

Methods

In this study, we present a complete framework for constructing a 3D Morphable Shape Model (3DMM) of the face and fitting it to a target RGB image. A distinctive characteristic of the model is that it is based on localized components of deformation. The fitting transformation is performed from 3D to 2D and is guided by the correspondence between landmarks detected in the target image and landmarks manually annotated on the average 3DMM. The fitting is also performed in two steps, which disentangles the face deformations due to the identity of the target subject from those induced by facial actions.

Results

In the experimental validation of the method, we used the MICC-3D dataset, which includes 11 subjects, each acquired in a neutral pose plus 18 facial actions that deform the face in localized and asymmetric ways. For each acquisition, we fitted the 3DMM to an RGB frame at the apex of the facial action and to the neutral frame, and computed the extent of the deformation. The results indicate that the proposed approach can accurately capture face deformations, even localized and asymmetric ones.

Conclusions

The proposed framework demonstrates the feasibility of measuring the deformations of a reconstructed 3D face model to monitor the facial actions performed in response to a set of target ones. Notably, these results were obtained using only RGB targets, without the need for 3D scans captured with costly devices. This opens the way to using the proposed tool for remote medical monitoring of rehabilitation.
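To make the Methods paragraph concrete, the following is a minimal sketch, in Python with NumPy, of the landmark-guided linear least-squares step that fitting a linear 3DMM to 2D landmarks typically reduces to. All names here (fit_3dmm_to_landmarks, the weak-perspective camera parameters, the regularization weight) are illustrative assumptions, not the paper's actual code or API.

    # Minimal sketch of landmark-guided 3DMM fitting.
    # Assumptions: a linear shape model S = S_mean + sum_k w_k * C_k and a
    # weak-perspective camera; names are illustrative, not from the paper.
    import numpy as np

    def fit_3dmm_to_landmarks(mean_lmk3d, components, target_lmk2d,
                              scale, R, t, lam=1e-3):
        """Estimate deformation coefficients w so that the projected model
        landmarks match the 2D landmarks detected in the target image.

        mean_lmk3d:   (L, 3) landmarks annotated on the average 3DMM
        components:   (K, L, 3) localized deformation components
        target_lmk2d: (L, 2) landmarks detected in the RGB target
        scale, R, t:  weak-perspective camera (scalar, first two rows of a
                      3D rotation as a (2, 3) array, 2D translation)
        lam:          Tikhonov regularization weight
        """
        K = components.shape[0]
        P = scale * R                                # (2, 3) projection
        # Residual between target landmarks and the projected mean model
        r = (target_lmk2d - (mean_lmk3d @ P.T + t)).reshape(-1)
        # Each column of A is the projected 2D effect of one component
        A = np.stack([(components[k] @ P.T).reshape(-1) for k in range(K)],
                     axis=1)
        # Regularized linear least squares: min ||A w - r||^2 + lam ||w||^2
        w = np.linalg.solve(A.T @ A + lam * np.eye(K), A.T @ r)
        return w

Under the two-step scheme described in the abstract, a routine like this would run twice: once to estimate identity coefficients against the neutral frame, and once to estimate facial-action coefficients against the apex frame. The extent of the deformation can then be read off as the per-vertex displacement between the two reconstructed models.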
Year: 2022
Volume: 4
Pages: 306-323
Goal 9: Industry, Innovation, and Infrastructure

Use this identifier to cite or link to this resource: https://hdl.handle.net/2158/1289632