Forward-Looking Sonar CNN-based Automatic Target Recognition: An experimental campaign with FeelHippo AUV / Zacchini L.; Franchi M.; Manzari V.; Pagliai M.; Secciani N.; Topini A.; Stifani M.; Ridolfi A. - ELECTRONIC. - (2020), pp. 1-6. (Paper presented at the 2020 IEEE/OES Autonomous Underwater Vehicles Symposium, AUV 2020, held in St. Johns, NL, Canada, in 2020) [10.1109/AUV50043.2020.9267902].

Forward-Looking Sonar CNN-based Automatic Target Recognition: An experimental campaign with FeelHippo AUV

Zacchini L.;Franchi M.;Pagliai M.;Secciani N.;Topini A.;Ridolfi A.
2020

Abstract

In recent years, seabed inspection has become one of the most sought-after tasks for Autonomous Underwater Vehicles (AUVs) in various applications. Forward-Looking Sonars (FLS) are commonly favored over optical cameras, which are non-negligibly affected by environmental conditions, to carry out inspection and exploration tasks. Indeed, sonars are not influenced by illumination conditions and can provide long-range data, at the cost of lower-resolution images compared to optical sensors. However, due to the lack of distinctive features, sonar images are often hard to interpret with conventional image processing techniques; as a result, human operators must analyze thousands of images acquired during an AUV mission to identify potential targets of interest. This paper reports the development of an Automatic Target Recognition (ATR) methodology to identify and localize potential targets in FLS imagery, which could help human operators in this challenging, demanding task. The Mask R-CNN (Region-based Convolutional Neural Network), which constitutes the core of the proposed solution, was trained on a dataset acquired in May 2019 at the Naval Support and Experimentation Centre (Centro di Supporto e Sperimentazione Navale, CSSN) basin in La Spezia (Italy). The ATR strategy was then successfully validated online at the same site in October 2019, where the targets were replaced and relocated on the seabed.
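The abstract identifies Mask R-CNN as the core of the ATR pipeline. As a rough illustration only, the sketch below shows how a Mask R-CNN with a ResNet-50 FPN backbone could be adapted and run on sonar frames using PyTorch/torchvision; the class count, image size, and COCO pretraining are assumptions for the example and do not reflect the authors' actual implementation or dataset.

```python
# Minimal sketch (not the authors' code): adapting a torchvision Mask R-CNN
# for FLS target detection/segmentation. Class count and image size are
# hypothetical placeholders.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor


def build_fls_mask_rcnn(num_classes: int):
    # Start from a COCO-pretrained Mask R-CNN and swap in new box/mask heads
    # sized for the sonar target classes.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)
    return model


if __name__ == "__main__":
    model = build_fls_mask_rcnn(num_classes=2)  # background + generic "target"
    model.eval()
    # Single FLS frame replicated to 3 channels for the ImageNet-style backbone;
    # a random tensor stands in for a real acoustic image here.
    sonar_image = torch.rand(3, 512, 512)
    with torch.no_grad():
        detections = model([sonar_image])[0]
    # 'boxes', 'labels', 'scores', and per-instance 'masks' localize candidate
    # targets, i.e., the kind of output an ATR pipeline would hand to an operator.
    print(detections["boxes"].shape, detections["scores"].shape)
```

The head-replacement step mirrors the standard torchvision fine-tuning recipe; in practice the network would be trained on the labeled FLS dataset before online deployment.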
2020
2020 IEEE/OES Autonomous Underwater Vehicles Symposium, AUV 2020
St. Johns, NL, Canada
Zacchini L.; Franchi M.; Manzari V.; Pagliai M.; Secciani N.; Topini A.; Stifani M.; Ridolfi A.
Files in this item:
File: 09267902_DEF_DEF.pdf
Access: Closed access
Description: Main article
Type: Publisher's PDF (Version of record)
License: All rights reserved
Size: 2.31 MB
Format: Adobe PDF

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/2158/1225542
Citations
  • PMC: ND
  • Scopus: 25
  • Web of Science (ISI): 2