Wearable Systems for Improving Museum Experience / Lorenzo Seidenari, Claudio Baecchi, Tiberio Uricchio, Andrea Ferracani, Marco Bertini, Alberto Del Bimbo. - Electronic. - (2018), pp. 3-28.

Wearable Systems for Improving Museum Experience

Lorenzo Seidenari; Claudio Baecchi; Tiberio Uricchio; Andrea Ferracani; Marco Bertini; Alberto Del Bimbo
2018

Abstract

In this chapter we present original approaches for the development of a smart audio guide that adapts to the actions and interests of visitors of cultural heritage sites and exhibitions, in both indoor and outdoor scenarios. The guide is capable of perceiving its context: it understands what the user is looking at and whether they are moving or inattentive (e.g., talking with someone), in order to provide relevant information at the appropriate time. Artworks are recognized automatically with different approaches depending on the scenario: Convolutional Neural Networks (CNNs) indoors and SIFT descriptors outdoors, performing object localization and classification where appropriate. The computer vision system runs in real time on the mobile device, also exploiting a fusion of audio and motion sensors. Configurable interfaces that ease interaction and the enjoyment of multimedia insights are provided for both scenarios. The audio guide has been deployed on an NVIDIA Jetson TX1 and an NVIDIA Shield Tablet K1, tested in real-world environments (the Bargello Museum and the historical city center of Florence), and evaluated with regard to system usability.
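As a purely illustrative aside (not the chapter's actual implementation), the following Python/OpenCV sketch shows the simplest form of the SIFT-based outdoor recognition named in the abstract: keypoints from a camera frame are matched against one reference photo of a landmark using Lowe's ratio test, and a minimum number of consistent matches triggers recognition. The file names and both thresholds are hypothetical; a real system would match against a database of references and verify geometry.

    import cv2

    # Hypothetical inputs: a grayscale camera frame and one reference
    # photo of a known landmark (in practice, a database of references).
    frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)
    reference = cv2.imread("landmark_reference.jpg", cv2.IMREAD_GRAYSCALE)

    # Detect SIFT keypoints and compute their descriptors.
    sift = cv2.SIFT_create()
    _, des_frame = sift.detectAndCompute(frame, None)
    _, des_ref = sift.detectAndCompute(reference, None)

    # Match each frame descriptor to its two nearest reference descriptors
    # and keep only unambiguous matches (Lowe's ratio test).
    matcher = cv2.FlannBasedMatcher()
    matches = matcher.knnMatch(des_frame, des_ref, k=2)
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
            good.append(pair[0])

    # Assumed decision rule: enough consistent matches -> recognized.
    if len(good) >= 20:
        print(f"Landmark recognized ({len(good)} good matches)")
    else:
        print("No landmark recognized")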
Year: 2018
ISBN: 9780128146019
Book: Multi-Modal Behavioral Analysis in the Wild: Advances and Challenges
Pages: 3-28
Authors: Lorenzo Seidenari, Claudio Baecchi, Tiberio Uricchio, Andrea Ferracani, Marco Bertini, Alberto Del Bimbo
Files in this product:

File: Book.pdf
Access: open access
Type: publisher's PDF (version of record)
License: all rights reserved
Size: 5.8 MB
Format: Adobe PDF

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this resource: https://hdl.handle.net/2158/1113104