SeSAME: Re-identification-based ambient intelligence system for museum environment

Paolanti M.; Frontoni E.
2022-01-01

Abstract

Nowadays, understanding and analysing visitors' activities and behaviours is becoming imperative for personalising and improving the user experience in a museum environment. Visitors' behaviour can provide important statistics, insights and objective information about their interactions, such as attraction, attention and action. These data are of great value to museum curators and are among the parameters that need to be assessed. Traditionally, this information is collected through manual approaches based on questionnaires or visual observations, a procedure that is time-consuming and can be affected by the subjective interpretation of the evaluator. From such premises, this paper presents SeSAME (Senseable Self Adapting Museum Environment), a novel system for collecting and analysing the behaviours of visitors inside a museum environment. SeSAME is based on a multi-modal deep neural network architecture able to extract anthropometric and appearance features from RGB-D videos acquired in crowded environments. Our approach has been tested with four different temporal modelling methods for aggregating a sequence of image-level features into clip-level features. As a benchmark, this paper uses TVPR2, a public dataset of videos acquired with an RGB-D camera in a top-view configuration, in the presence of persistent and temporary heavy occlusion. Moreover, a dataset specifically collected for this work has been acquired in a real museum environment, Palazzo Buonaccorsi, an important historical building in Macerata, in the Marche Region in central Italy. During the experimental phase, the evaluation metrics show the effectiveness and suitability of the proposed method.
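The abstract does not specify which four temporal modelling methods were tested. As an illustration only, the sketch below shows two common aggregation strategies (temporal average pooling and temporal max pooling) that turn a sequence of per-frame feature vectors into a single clip-level descriptor; all function names and tensor shapes are hypothetical and not taken from the paper.

```python
import torch


def average_pool(frame_features: torch.Tensor) -> torch.Tensor:
    """Clip descriptor via temporal average pooling over the frame axis."""
    return frame_features.mean(dim=0)


def max_pool(frame_features: torch.Tensor) -> torch.Tensor:
    """Clip descriptor via temporal max pooling over the frame axis."""
    return frame_features.max(dim=0).values


# Hypothetical shapes: a 30-frame clip with 512-dim image-level features
# per frame, e.g. from some per-frame CNN (not specified in the abstract).
frames = torch.randn(30, 512)
print(average_pool(frames).shape)  # torch.Size([512])
print(max_pool(frames).shape)      # torch.Size([512])
```

Either pooling yields a fixed-size clip descriptor regardless of clip length, which is what makes such aggregation useful for re-identification across variable-length video sequences.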
2022
Elsevier B.V.
International
Files in this product:
Paolanti_SeSAME.pdf (authorised users only)
Type: Publisher's version (published version with the publisher's layout)
Licence: Creative Commons
Size: 1.09 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11393/305129
Citations
  • PMC: n/a
  • Scopus: 2
  • Web of Science (ISI): 2