Multi Channel-Kernel Canonical Correlation Analysis for Cross-View Person Re-Identification / Lisanti, Giuseppe; Karaman, Svebor; Masi, Iacopo. - In: ACM TRANSACTIONS ON MULTIMEDIA COMPUTING, COMMUNICATIONS AND APPLICATIONS. - ISSN 1551-6857. - Electronic. - 13:(2017), pp. 1-19. [http://doi.acm.org/10.1145/3038916]
Multi Channel-Kernel Canonical Correlation Analysis for Cross-View Person Re-Identification
Lisanti, Giuseppe; Karaman, Svebor; Masi, Iacopo
2017
Abstract
In this paper, we introduce a method to overcome one of the main challenges of person re-identification in multi-camera networks, namely cross-view appearance changes. The proposed solution addresses the extreme variability of person appearance across camera views by exploiting multiple feature representations. For each feature, Kernel Canonical Correlation Analysis (KCCA) with different kernels is employed to learn several projection spaces in which the appearance correlation between samples of the same person observed from different cameras is maximized. An iterative logistic regression finally selects and weights the contributions of each projection and performs the matching between the two views. Experimental evaluation shows that the proposed solution achieves performance comparable to the state of the art on the VIPeR and PRID 450s datasets and improves on it on the PRID and CUHK01 datasets.
File | Size | Format | Access
---|---|---|---
TOMM-2016-0095.R1.final.pdf | 2.03 MB | Adobe PDF | Authorized users only (request a copy)

Type: Other
License: All rights reserved
Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.