Deep Learning for on-board AUV Automatic Target Recognition for Optical and Acoustic imagery / Zacchini L.; Ridolfi A.; Topini A.; Secciani N.; Bucci A.; Topini E.; Allotta B. - Electronic. - 53:(2020), pp. 14589-14594. (Paper presented at the 21st IFAC World Congress 2020, held in Germany in 2020) [10.1016/j.ifacol.2020.12.1466].

Deep Learning for on-board AUV Automatic Target Recognition for Optical and Acoustic imagery

Zacchini L.; Ridolfi A.; Topini A.; Secciani N.; Bucci A.; Topini E.; Allotta B.
2020

Abstract

In the widespread field of underwater robotics applications, the demand for increasingly intelligent vehicles is leading to the development of Autonomous Underwater Vehicles (AUVs) capable of understanding and engaging with the surrounding environment. Consequently, to push the boundaries of cutting-edge smart AUVs, the automatic recognition of targets has become one of the most investigated topics, and Deep Learning-based strategies have shown remarkable results. In the context of this work, two different neural network architectures, based on the Single Shot Multibox Detector (SSD) and on the Faster Region-based Convolutional Neural Network (Faster R-CNN), have been trained and validated on optical and acoustic datasets, respectively. In particular, the models have been trained with the images acquired by FeelHippo AUV during the European Robotics League (ERL) competition, which took place in La Spezia, Italy, in July 2018. The proposed Automatic Target Recognition (ATR) strategy has then been validated with FeelHippo AUV in an on-board post-processing stage by exploiting the images provided by both a 2D Forward Looking Sonar (FLS) and an IP camera mounted on board the vehicle.
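For readers unfamiliar with the two detector families named in the abstract, the sketch below is purely illustrative and is not the authors' pipeline: it shows how off-the-shelf SSD and Faster R-CNN models can be instantiated and run on single frames with PyTorch/torchvision. The file names, the confidence threshold, and the use of COCO-pretrained weights are assumptions; the paper instead trains the networks on the ERL optical and acoustic datasets, pairing SSD with the camera imagery and Faster R-CNN with the FLS imagery.

    # Illustrative sketch (not the authors' code): running off-the-shelf SSD and
    # Faster R-CNN detectors on single images, as a stand-in for the optical and
    # acoustic ATR pipeline described in the abstract.
    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    # COCO-pretrained weights used here only for illustration; the paper
    # fine-tunes on images collected by FeelHippo AUV at the ERL competition.
    ssd = torchvision.models.detection.ssd300_vgg16(weights="DEFAULT").eval()
    frcnn = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

    def detect(model, image_path, score_thr=0.5):
        # Load an RGB frame, run the detector, and keep confident detections.
        img = to_tensor(Image.open(image_path).convert("RGB"))
        with torch.no_grad():
            out = model([img])[0]  # dict with "boxes", "labels", "scores"
        keep = out["scores"] > score_thr
        return out["boxes"][keep], out["labels"][keep], out["scores"][keep]

    # Hypothetical file names standing in for a camera frame and an FLS frame.
    optical_dets = detect(ssd, "camera_frame.png")
    acoustic_dets = detect(frcnn, "fls_frame.png")

The pairing mirrors the abstract: the single-shot SSD handles the optical stream, while the two-stage Faster R-CNN handles the acoustic (FLS) stream.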
Year: 2020
Journal: IFAC-PapersOnLine
Conference: 21st IFAC World Congress 2020, Germany, 2020
Sustainable Development Goal: Goal 9: Industry, Innovation, and Infrastructure
Authors: Zacchini L.; Ridolfi A.; Topini A.; Secciani N.; Bucci A.; Topini E.; Allotta B.
Files in this item:
File: 1-s2.0-S2405896320318784-main.pdf (open access)
Description: Main article
Type: Publisher's PDF (Version of record)
License: Open Access
Size: 602.19 kB
Format: Adobe PDF
Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/2158/1257760
Citations
  • PMC: ND
  • Scopus: 15
  • Web of Science (ISI): 9