
An Image-based Deep Learning Workflow for 3D Heritage Point Cloud Semantic Segmentation / Eugenio Pellis, Arnadi Murtiyoso, Andrea Masiero, Grazia Tucci, Michele Betti, Pierre Grussenmeyer. - ELECTRONIC. - XLVI-2/W1-2022:(2022), pp. 0-0. (Paper presented at the 9th International Workshop 3D-ARCH “3D Virtual Reconstruction and Visualization of Complex Architectures”, held in Mantova) [10.5194/isprs-archives-XLVI-2-W1-2022-429-2022].

An Image-based Deep Learning Workflow for 3D Heritage Point Cloud Semantic Segmentation

Eugenio Pellis; Andrea Masiero; Grazia Tucci; Michele Betti
2022

Abstract

Interest in high-resolution semantic 3D models of historical buildings has increased continuously over the last decade, thanks to their utility in the protection, conservation and restoration of cultural heritage sites. The current generation of surveying tools allows the quick collection of large and detailed amounts of data: such data ensure accurate spatial representations of buildings, but their use in the creation of informative semantic 3D models remains a challenging task that still requires time-consuming manual intervention by expert operators. Increasing the level of automation, for instance by developing an automatic semantic segmentation procedure that enables machine scene understanding, can therefore dramatically improve the overall processing pipeline. Accordingly, this paper presents a new workflow for the automatic semantic segmentation of 3D point clouds based on a multi-view approach. The workflow comprises two steps: first, neural network-based semantic segmentation is performed on images of the building. Then, the image labelling is back-projected, through the use of masked images, onto 3D space by exploiting photogrammetry and dense image matching principles. The results are promising, with good performance in the image segmentation and remarkable potential in the 3D reconstruction procedure.
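The second step of the workflow — transferring per-pixel semantic labels onto the 3D points — can be illustrated with a minimal sketch. This is a toy under stated assumptions (a simple pinhole camera model, a known world-to-camera pose, nearest-pixel lookup; the function name `backproject_labels` is invented here for illustration), not the authors' photogrammetric dense-matching implementation:

```python
import numpy as np

def backproject_labels(points, K, R, t, label_mask):
    """Assign each 3D point the class label of the pixel it projects to.

    points: (N, 3) world coordinates; K: (3, 3) camera intrinsics;
    R, t: world-to-camera rotation and translation;
    label_mask: (H, W) integer class map from the 2D segmentation.
    Returns an (N,) label array, -1 for points not visible in the image.
    """
    H, W = label_mask.shape
    cam = points @ R.T + t                 # world -> camera coordinates
    labels = np.full(len(points), -1, dtype=int)
    in_front = cam[:, 2] > 0               # discard points behind the camera
    uvw = cam[in_front] @ K.T              # pinhole projection
    uv = (uvw[:, :2] / uvw[:, 2:3]).round().astype(int)
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    idx = np.flatnonzero(in_front)[inside]
    labels[idx] = label_mask[uv[inside, 1], uv[inside, 0]]   # row = v, col = u
    return labels
```

In a multi-view setting, this lookup would run once per labelled image and the per-point labels would then be fused (e.g. by majority vote), and occlusion handling via the dense-matching depth would be needed; both are omitted here for brevity.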
The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences
9th International Workshop 3D-ARCH “3D Virtual Reconstruction and Visualization of Complex Architectures”
Mantova
Eugenio Pellis, Arnadi Murtiyoso, Andrea Masiero, Grazia Tucci, Michele Betti, Pierre Grussenmeyer
Files in this record:
File: An Image-based Deep Learning Workflow for 3D Heritage Semantic Segmentation.pdf
Access: open access
Type: Publisher's PDF (Version of record)
Licence: Open Access
Size: 1.4 MB
Format: Adobe PDF

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this resource: https://hdl.handle.net/2158/1261891
Citations
  • PMC: n/a
  • Scopus: 14
  • Web of Science: n/a