Deep Sentiment Features of Context and Faces for Affective Video Analysis / Baecchi, Claudio; Uricchio, Tiberio; Bertini, Marco; Del Bimbo, Alberto. - ELECTRONIC. - (2017), pp. 72-77. (Paper presented at the International Conference on Multimedia Retrieval, held in Bucharest, 6-9 June 2017) [10.1145/3078971.3079027].

Deep Sentiment Features of Context and Faces for Affective Video Analysis

Baecchi, Claudio; Uricchio, Tiberio; Bertini, Marco; Del Bimbo, Alberto
2017

Abstract

Given the huge number of hours of video available on video sharing platforms such as YouTube and Vimeo, the development of automatic tools that help users find videos that fit their interests has attracted the attention of both the scientific and industrial communities. So far the majority of works have addressed semantic analysis, to identify objects, scenes and events depicted in videos, but more recently affective analysis of videos has started to gain more attention. In this work we investigate the use of sentiment-driven features to classify the induced sentiment of a video, i.e. the sentiment reaction of the user. Instead of using standard computer vision features, such as CNN features or SIFT features trained to recognize objects and scenes, we exploit sentiment-related features such as the ones provided by Deep-SentiBank [4], and features extracted from models that exploit deep networks trained on face expressions. We experiment on two recently introduced datasets, LIRIS-ACCEDE [2] and MEDIAEVAL-2015, that provide sentiment annotations of a large set of short videos. We show that our approach not only outperforms the current state of the art in terms of valence and arousal classification accuracy, but also uses a smaller number of features, thus requiring less video processing.
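As a rough illustration of the kind of pipeline the abstract describes (not the authors' actual code), the sketch below mean-pools per-frame sentiment features into a single video descriptor and trains a linear classifier for binary valence. Everything concrete here is an assumption: random arrays stand in for real Deep-SentiBank and face-expression extractors, the 2089/7 feature dimensions, the mean-pooling step, and the linear SVM are illustrative choices, and the paper's actual fusion and classifier may differ.

    # Sketch of a sentiment-feature video classification pipeline
    # (illustrative only; placeholders replace real feature extractors).
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    N_VIDEOS, N_FRAMES = 200, 30
    DIM_SENTIBANK = 2089  # Deep-SentiBank adjective-noun pair scores (assumed dim)
    DIM_FACE = 7          # e.g. a softmax over 7 basic facial expressions (assumed)

    def pool_video(frame_feats: np.ndarray) -> np.ndarray:
        """Mean-pool a (frames x dim) feature matrix into one video descriptor."""
        return frame_feats.mean(axis=0)

    # Placeholder "extracted" features: context stream + face stream per frame.
    X = np.stack([
        pool_video(np.hstack([
            rng.random((N_FRAMES, DIM_SENTIBANK)),  # context (Deep-SentiBank-like)
            rng.random((N_FRAMES, DIM_FACE)),       # faces (expression-net-like)
        ]))
        for _ in range(N_VIDEOS)
    ])
    y = rng.integers(0, 2, size=N_VIDEOS)  # placeholder binary valence labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = LinearSVC(C=1.0, max_iter=5000).fit(X_tr, y_tr)
    print("valence accuracy:", accuracy_score(y_te, clf.predict(X_te)))

With real features, the same descriptor and classifier would be trained separately for valence and arousal; on the random placeholders above the accuracy is of course chance-level.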
Year: 2017
Published in: Proc. of the ACM on International Conference on Multimedia Retrieval (ICMR 2017)
Conference: International Conference on Multimedia Retrieval
Location: Bucharest
Dates: 6-9 June 2017
Authors: Baecchi, Claudio; Uricchio, Tiberio; Bertini, Marco; Del Bimbo, Alberto
Files in this record:

File: p72-baecchi.pdf
Access: open access
Type: Publisher's PDF (Version of record)
License: All rights reserved
Size: 2.89 MB
Format: Adobe PDF
Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this resource: https://hdl.handle.net/2158/1092380
Citazioni
  • PMC: not available
  • Scopus: 16
  • Web of Science: 9