A multiview approach for the semantic segmentation of heritage building point clouds / Eugenio Pellis. - (2023).

A multiview approach for the semantic segmentation of heritage building point clouds

Eugenio Pellis
2023

Abstract

Computer-aided digitization has become a powerful tool for enhancing the documentation and preservation of cultural heritage, as demonstrated by the recent emergence of Heritage Building Information Modeling (H-BIM). However, the reconstruction of as-built models still poses significant challenges, particularly in managing the large-scale data resulting from acquisition campaigns. To address these issues, this thesis proposes a novel point cloud semantic segmentation procedure based on a deep learning multiview approach. First, the proposed approach employs a deep convolutional neural network to extract semantic information from the multiple images resulting from a photogrammetric survey. Subsequently, the extracted semantic information is projected onto the related 3D photogrammetric point cloud by means of the intrinsic and extrinsic camera parameters. The method is validated and assessed through a series of tests using an image and point cloud dataset composed of five heritage scenes, specifically designed to train and test the proposed procedure. Overall, the results are still at an early stage in terms of predicting unseen scenarios, but the procedure demonstrates promising advancements in terms of performance and reliability, provided it is properly trained on datasets with greater generalization capability.
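The projection step summarized in the abstract follows the standard pinhole camera model. The sketch below illustrates, under assumed names and data layouts (project_labels, label_maps, per-view K, R, t), how per-pixel class labels from the segmented images could be transferred to the photogrammetric point cloud by back-projecting each 3D point into every calibrated view and taking a majority vote; it is an illustrative sketch, not the thesis implementation.

```python
import numpy as np

def project_labels(points, label_maps, intrinsics, rotations, translations):
    """Assign a semantic label to each 3D point by majority vote over views.

    points       : (N, 3) photogrammetric point cloud in world coordinates
    label_maps   : list of (H, W) integer arrays, one per-pixel semantic map per image
    intrinsics   : list of (3, 3) camera matrices K
    rotations    : list of (3, 3) world-to-camera rotation matrices R
    translations : list of (3,)  world-to-camera translation vectors t

    All argument names and conventions are assumptions for this sketch.
    """
    n_points = points.shape[0]
    n_classes = int(max(m.max() for m in label_maps)) + 1
    votes = np.zeros((n_points, n_classes), dtype=np.int64)

    for labels, K, R, t in zip(label_maps, intrinsics, rotations, translations):
        h, w = labels.shape
        # Extrinsic parameters: world coordinates -> camera coordinates
        cam = points @ R.T + t
        in_front = cam[:, 2] > 0                      # keep points in front of the camera
        # Intrinsic parameters: camera coordinates -> pixel coordinates (pinhole model)
        pix = cam[in_front] @ K.T
        u = np.round(pix[:, 0] / pix[:, 2]).astype(int)
        v = np.round(pix[:, 1] / pix[:, 2]).astype(int)
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        idx = np.flatnonzero(in_front)[inside]        # point indices visible in this image
        votes[idx, labels[v[inside], u[inside]]] += 1

    # Per-point label by majority vote over all views (class 0 if never projected)
    return votes.argmax(axis=1)
```

In practice an occlusion test (for example a per-view depth buffer) would also be needed so that hidden points do not receive labels from surfaces in front of them; the sketch omits this for brevity.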
Grazia Tucci, Andrea Masiero, Michele Betti, Pierre Grussenmeyer
Italy
Files in this record:
PELLIS_EUGENIO_A multiview approach_2023.pdf (open access)
Type: Editorial PDF (Version of record)
Licence: Open Access
Size: 12.65 MB
Format: Adobe PDF

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this resource: https://hdl.handle.net/2158/1346101