Visual Simultaneous Localization and Mapping (VSLAM) and Visual Odometry (VO) are fundamental problems that must be properly tackled to enable autonomous and effective movement of vehicles and robots supported by vision-based positioning systems. This study presents a publicly shared dataset for SLAM research, collected in an outdoor area of Yildiz Technical University (YTU) by an acquisition system mounted on a terrestrial vehicle. The acquisition system includes two cameras, an inertial measurement unit, and two GPS receivers; all sensors have been calibrated and synchronized. Beyond introducing a new dataset, this study also proposes a new recurrent neural network-based Visual Inertial Odometry (VIO) method and, to demonstrate the effectiveness of the introduced dataset, additionally applies VIO to the KITTI dataset. The effectiveness of the proposed method is further demonstrated by comparison with the state-of-the-art ORB-SLAM2 and OKVIS methods. The experimental results show that the YTU dataset is robust enough to be used for benchmarking studies and that the proposed deep learning-based VIO outperforms the two traditional methods.
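The abstract describes a recurrent neural network-based VIO but gives no architectural detail. The following is a minimal, purely illustrative NumPy sketch of the general idea of recurrent visual-inertial fusion: per-frame visual and IMU features are concatenated, a recurrent hidden state is updated, and a 6-DoF pose increment is regressed. All dimensions, weights, and function names are hypothetical assumptions, not the authors' published network.

```python
import numpy as np

# Illustrative sketch only (NOT the paper's architecture): a single
# recurrent cell fusing visual and inertial features per frame.
rng = np.random.default_rng(0)

VIS_DIM, IMU_DIM, HID_DIM, POSE_DIM = 8, 6, 16, 6  # toy sizes (assumed)

# Randomly initialized weights stand in for trained parameters.
W_in = rng.standard_normal((HID_DIM, VIS_DIM + IMU_DIM)) * 0.1
W_h = rng.standard_normal((HID_DIM, HID_DIM)) * 0.1
W_out = rng.standard_normal((POSE_DIM, HID_DIM)) * 0.1

def vio_step(h, visual_feat, imu_feat):
    """One recurrent step: fuse features, update the hidden state,
    and regress a 6-DoF pose increment (tx, ty, tz, roll, pitch, yaw)."""
    x = np.concatenate([visual_feat, imu_feat])
    h_new = np.tanh(W_in @ x + W_h @ h)  # recurrent state update
    delta_pose = W_out @ h_new           # pose regression head
    return h_new, delta_pose

# Run a short synthetic sequence and naively accumulate the trajectory.
h = np.zeros(HID_DIM)
pose = np.zeros(POSE_DIM)
for t in range(5):
    vis = rng.standard_normal(VIS_DIM)  # stand-in for CNN image features
    imu = rng.standard_normal(IMU_DIM)  # stand-in for accel/gyro features
    h, dp = vio_step(h, vis, imu)
    pose += dp                          # simplistic pose integration

print(pose.shape)  # (6,)
```

In a trained system the visual features would come from a CNN encoder and the IMU features from high-rate accelerometer/gyroscope readings, with the recurrent state carrying temporal context across frames; this sketch only illustrates the data flow.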
Gurturk M.; Yusefi A.; Aslan M.F.; Soycan M.; Durdu A.; Masiero A. The YTU dataset and recurrent neural network based visual-inertial odometry. Measurement, ISSN 0263-2241, vol. 184 (2021). [10.1016/j.measurement.2021.109878]