A Large Reconstruction Model Driven Approach to Support Humans in Digitization of Dance Visual Material into 3D environments
Stacchio, Lorenzo;
2025-01-01
Abstract
Heritage in the domain of dance amounts to a vast set of multimodal information, representing both tangible and intangible materials. Modern systems leverage different Artificial Intelligence (AI)-driven paradigms to enhance the preservation, accessibility, quantitative data analysis, and valorization of dance heritage. One particular outcome of this application is the generation of linked semantic information among multimodal data regarding a particular dance entity, which is, however, hard to interpret and visualize. For this reason, Extended Reality (XR) and immersive paradigms could be employed to visualize such information, also easing its manipulation. However, when tangible material is involved, there is still a gap in how to directly project objects, entities, and processes captured in flat 2D pictures into the 3D realm. Since manual 3D modeling is labor-intensive, we here introduce and discuss a Large Reconstruction Model-driven framework for accelerating the digitization of visual material, also integrating discriminative AI approaches to generate 3D models starting from 2D pictures through a human-in-the-loop (HITL) and controllable approach. To validate the approach, we applied it to a specific case study, linked to the artistic legacy of the dancer and choreographer Rudolf Nureyev, to digitize his multimodal materials. The implications of the proposed framework could impact various creative industries and cultural heritage preservation efforts.
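The abstract outlines a pipeline in which discriminative AI first isolates the depicted subject in a 2D picture, a Large Reconstruction Model (LRM) then lifts it into a 3D model, and a human operator validates each result. Since the paper itself is not reproduced on this record page, the following Python sketch is purely illustrative of that loop: `segment_subject`, `lrm_reconstruct`, and the console review prompt are hypothetical placeholders standing in for whichever segmentation model, single-image-to-3D reconstruction model, and review interface an actual implementation would use.

```python
from pathlib import Path

# Hypothetical placeholders: a real implementation would wrap a segmentation
# model (discriminative AI) and a Large Reconstruction Model (LRM) checkpoint.
def segment_subject(image_path: Path) -> Path:
    """Isolate the dance-related subject (person, costume, prop) from the photo."""
    raise NotImplementedError("plug in a segmentation model here")

def lrm_reconstruct(masked_image: Path) -> Path:
    """Produce a 3D mesh (e.g., .obj or .glb) from the single masked 2D picture."""
    raise NotImplementedError("plug in a Large Reconstruction Model here")

def digitize_with_hitl(image_paths: list[Path], out_dir: Path) -> list[Path]:
    """Human-in-the-loop digitization: reconstruct, review, keep only approved meshes."""
    out_dir.mkdir(parents=True, exist_ok=True)
    approved: list[Path] = []
    for image_path in image_paths:
        masked = segment_subject(image_path)
        mesh = lrm_reconstruct(masked)
        # Controllability point: the operator inspects the mesh and decides
        # whether to archive it or discard it (e.g., to retry with another mask).
        answer = input(f"Accept reconstruction for {image_path.name}? [y/N] ")
        if answer.strip().lower() == "y":
            approved.append(mesh)
    return approved
```

The HITL check is kept as an explicit per-item decision so that no automatically generated model enters the archive without curatorial approval, which is the controllability property the abstract emphasizes.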
| File | Size | Format | |
|---|---|---|---|
| RE_DANXE_ECAI.pdf (open access; License: All rights reserved) | 11.24 MB | Adobe PDF | View/Open |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


