Discovering Identity Specific Activation Patterns in Deep Descriptors for Template Based Face Recognition / Claudio Ferrari; Stefano Berretti; Alberto Del Bimbo. - ELECTRONIC. - (2019), pp. 1-5. (Paper presented at the IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019), held in Lille, France, 14-18 May 2019) [10.1109/FG.2019.8756604].

Discovering Identity Specific Activation Patterns in Deep Descriptors for Template Based Face Recognition

Claudio Ferrari; Stefano Berretti; Alberto Del Bimbo
2019

Abstract

The majority of recent face recognition systems are based on Deep Convolutional Neural Networks (DCNNs). These networks are trained on massive collections of face images to learn a compact representation (deep descriptor) that captures identity information. Recognition is then performed by computing a similarity (or distance) measure between descriptors. In practice, however, descriptors also encode other intra-class variabilities, such as pose and expression. This well-known problem is usually addressed by designing specific loss functions or metric learning modules so that the learned descriptors maximize inter-class (identity) distances and minimize intra-class differences in the feature space. We tackle the problem from a different perspective, observing that descriptors associated with images of the same subject share, on average, similar patterns in their highest activation units. We support this observation by showing that improved accuracy can be obtained in a template-based recognition scenario by retaining the descriptor bins with the highest average activation and setting all the others to zero. These activation patterns are also employed to build identity-representative binary masks that can be used in place of the descriptors to match templates. We evaluate this strategy on the IJB-A dataset and show that it can significantly boost recognition accuracy.
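To make the matching strategy in the abstract concrete, the following is a minimal NumPy sketch of the two variants it describes: masking each template's descriptors to the bins with the highest average activation, and matching the binary masks themselves. The keep_ratio value, the average pooling of descriptors within a template, and the cosine score are illustrative assumptions, not the paper's exact settings.

    import numpy as np

    def template_activation_mask(descriptors, keep_ratio=0.25):
        # descriptors: (n_images, d) array of deep descriptors for one template.
        # keep_ratio is an assumed fraction of bins to retain, not the paper's value.
        mean_act = descriptors.mean(axis=0)            # average activation per bin
        k = max(1, int(keep_ratio * mean_act.size))    # number of bins to keep
        mask = np.zeros_like(mean_act)
        mask[np.argsort(mean_act)[-k:]] = 1.0          # mark the top-k bins
        return mask

    def match_templates(desc_a, desc_b, keep_ratio=0.25, masks_only=False):
        # Score two templates with cosine similarity (an assumed measure).
        mask_a = template_activation_mask(desc_a, keep_ratio)
        mask_b = template_activation_mask(desc_b, keep_ratio)
        if masks_only:
            a, b = mask_a, mask_b                      # match binary masks directly
        else:
            a = (desc_a * mask_a).mean(axis=0)         # keep top bins, zero the rest
            b = (desc_b * mask_b).mean(axis=0)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

With masks_only=True the identity-representative binary masks replace the descriptors entirely, which is the second variant the abstract mentions.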
2019
IEEE International Conference on Automatic Face & Gesture Recognition (FG 2019)
Lille, France
14-18 May 2019
Claudio Ferrari; Stefano Berretti; Alberto Del Bimbo
Files in this item:

fg19.pdf

Closed access

Description: main article
Type: Final refereed version (Postprint, Accepted manuscript)
License: All rights reserved
Size: 1.09 MB
Format: Adobe PDF

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/2158/1167257
Citations
  • PMC: ND
  • Scopus: 6
  • Web of Science: 3