Monitoring the health of Posidonia oceanica (P. oceanica) meadows is essential for preserving marine biodiversity and ensuring sustainable coastal management. This paper presents an autonomous underwater monitoring pipeline that integrates computer vision and navigation technologies onboard the FeelHippo AUV (Autonomous Underwater Vehicle). The proposed system leverages a DeepLabV3+ CNN (Convolutional Neural Network) for semantic segmentation of underwater images, enabling autonomous detection of P. oceanica. 3D maps of the seabed are obtained through stereo vision, using a ZED2 stereo camera encased in a flat-port housing. By fusing the segmentation outputs with spatial data from the onboard navigation systems and the stereo camera images, the approach yields geo-referenced point clouds that distinguish seagrass from non-seagrass areas. Preliminary field experiments conducted in the Mediterranean Sea validate the method's feasibility and highlight its potential for scalable, efficient, and autonomous marine habitat mapping. Future improvements aim to refine localization accuracy and further improve the robustness of the proposed workflow.
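The fusion step described above — combining a per-pixel segmentation mask with stereo depth and the vehicle pose to produce a geo-referenced, labeled point cloud — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the camera intrinsics, function name, and data layout are all assumptions.

```python
import numpy as np

# Hypothetical pinhole intrinsics (illustrative values, not the ZED2's
# actual calibration): focal lengths fx, fy and principal point cx, cy.
fx, fy, cx, cy = 700.0, 700.0, 320.0, 240.0

def label_point_cloud(depth, mask, pose):
    """Back-project pixels to 3D, attach seagrass labels, and
    transform the points into a world (geo-referenced) frame.

    depth : (H, W) metric depth map from the stereo camera
    mask  : (H, W) boolean segmentation output (True = seagrass)
    pose  : (4, 4) homogeneous camera-to-world transform from the
            navigation solution
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = np.isfinite(depth) & (depth > 0)      # keep usable pixels only
    z = depth[valid]
    x = (u[valid] - cx) * z / fx                  # pinhole back-projection
    y = (v[valid] - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=0)  # homogeneous
    pts_world = (pose @ pts_cam)[:3].T            # (N, 3) world-frame points
    labels = mask[valid].astype(np.uint8)         # 1 = seagrass, 0 = other
    return pts_world, labels
```

Accumulating the labeled points over successive stereo frames, each with its own navigation pose, would yield the kind of seagrass/non-seagrass map the paper reports.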
Liverani, Gherardo; Bucci, Alessandro; Parati, Filippo; Topini, Alberto; Magi, Adele; Ridolfi, Alessandro. "Autonomous Stereo Camera Based Semantic Mapping of Underwater Posidonia Oceanica." In: IEEE International Workshop on Metrology for the Sea: Learning to Measure Sea Health Parameters (MetroSea 2025), Genova, Italy, October 8-10, 2025, pp. 327-331. doi: 10.1109/metrosea66681.2025.11245758
Autonomous Stereo Camera Based Semantic Mapping of Underwater Posidonia Oceanica
Liverani, Gherardo; Bucci, Alessandro; Parati, Filippo; Topini, Alberto; Magi, Adele; Ridolfi, Alessandro
2025