Liu, Xialei; van de Weijer, Joost; Bagdanov, Andrew D. Leveraging Unlabeled Data for Crowd Counting by Learning to Rank. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 7661-7669. doi:10.1109/CVPR.2018.00799.
Leveraging Unlabeled Data for Crowd Counting by Learning to Rank
Bagdanov, Andrew D.
2018
Abstract
We propose a novel crowd counting approach that leverages abundantly available unlabeled crowd imagery in a learning-to-rank framework. To induce a ranking of cropped images, we use the observation that any sub-image of a crowded scene image is guaranteed to contain the same number or fewer persons than the super-image. This allows us to address the problem of limited size of existing datasets for crowd counting. We collect two crowd scene datasets from Google using keyword searches and query-by-example image retrieval, respectively. We demonstrate how to efficiently learn from these unlabeled datasets by incorporating learning-to-rank in a multi-task network which simultaneously ranks images and estimates crowd density maps. Experiments on two of the most challenging crowd counting datasets show that our approach obtains state-of-the-art results.
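To illustrate the ranking constraint described in the abstract, the minimal sketch below shows how a chain of nested crops taken from an unlabeled image induces an ordering on predicted counts, and how a pairwise hinge loss could penalize violations of that ordering. This is an assumed NumPy illustration, not the paper's exact formulation: the crop scales, the margin value, and the function names are hypothetical choices made for clarity.

```python
import numpy as np

def nested_crops(image, num_crops=5, scale=0.75):
    """Build a chain of centred crops, each 'scale' times the size of the
    previous one. Every crop is contained in the one before it, so its true
    person count can only be equal to or smaller than its super-crop's."""
    crops = [image]
    for _ in range(num_crops - 1):
        prev = crops[-1]
        h, w = int(prev.shape[0] * scale), int(prev.shape[1] * scale)
        y0, x0 = (prev.shape[0] - h) // 2, (prev.shape[1] - w) // 2
        crops.append(prev[y0:y0 + h, x0:x0 + w])
    return crops

def pairwise_ranking_loss(pred_counts, margin=0.0):
    """Hinge loss penalising any pair in which a sub-crop is predicted to
    contain more people than its super-crop. 'pred_counts' is ordered from
    largest crop to smallest; 'margin' is an assumed hyper-parameter."""
    loss = 0.0
    for i in range(len(pred_counts)):
        for j in range(i + 1, len(pred_counts)):  # crop j lies inside crop i
            loss += max(0.0, pred_counts[j] - pred_counts[i] + margin)
    return loss

# Toy usage: predicted counts for four nested crops of an unlabeled image,
# e.g. obtained by summing a predicted density map over each crop.
preds = [120.0, 95.0, 102.0, 60.0]   # the third crop violates the ordering
print(pairwise_ranking_loss(preds))  # 7.0 -> the violation is penalised
```

In the multi-task setting described above, a loss of this form on unlabeled crops would be combined with a standard density-map regression loss on the labeled data, so the unlabeled imagery only has to supply the relative ordering, not absolute counts.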