Outdoor object recognition for smart audio guides

Uricchio, Tiberio;
2017-01-01

Abstract

We present a smart audio guide that adapts itself to the environment the user is navigating. The system automatically builds a point-of-interest database, exploiting Wikipedia and Google APIs as sources. We rely on a computer vision system to overcome likely sensor limitations and to determine with high accuracy whether the user is facing a certain landmark or none at all. Thanks to this, the guide presents an audio description at the most appropriate moment, without any user intervention, using text-to-speech to augment the experience.
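A minimal sketch of the point-of-interest step described above, using the public Wikipedia geosearch API to list pages near a GPS fix. The function and parameter names are our own illustration, not the authors' code; the sketch only builds the request URL and parses a response payload.

```python
from urllib.parse import urlencode

WIKI_API = "https://en.wikipedia.org/w/api.php"

def geosearch_url(lat, lon, radius_m=500, limit=20):
    """Build a Wikipedia geosearch request for pages near (lat, lon)."""
    params = {
        "action": "query",
        "list": "geosearch",
        "gscoord": f"{lat}|{lon}",
        "gsradius": radius_m,
        "gslimit": limit,
        "format": "json",
    }
    return f"{WIKI_API}?{urlencode(params)}"

def parse_pois(payload):
    """Extract (title, lat, lon, distance_m) tuples from a geosearch response."""
    return [
        (p["title"], p["lat"], p["lon"], p["dist"])
        for p in payload.get("query", {}).get("geosearch", [])
    ]

# Example with a canned response (no network access needed):
sample = {"query": {"geosearch": [
    {"title": "Ponte Vecchio", "lat": 43.768, "lon": 11.253, "dist": 42.0}]}}
print(parse_pois(sample))
```

Each returned title could then be paired with the corresponding Wikipedia extract to generate the text-to-speech description the abstract mentions.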
ISBN: 9781450349062
File: OutdoorObjectRecognitionForSmartAudioGuides.pdf (Adobe PDF, 2.27 MB)
Access: authorized users only
License: publisher's copyright

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11393/313471
Warning: the data shown have not been validated by the university.

Citations
  • PMC: n/a
  • Scopus: 1
  • Web of Science: 0