ViewpointDepth: A New Dataset for Monocular Depth Estimation Under Viewpoint Shifts / Pjetri, Aurel; Caprasecca, Stefano; Taccari, Leonardo; Simoncini, Matteo; Monteagudo, Henrique Piñeiro; Walter, Wallace; de Andrade, Douglas Coimbra; Sambo, Francesco; Bagdanov, Andrew David. - Electronic. - (2025), pp. 406-411. (36th IEEE Intelligent Vehicles Symposium, IV 2025, Grand Hotel Italia, Str. Vasile Conta nr. 2, Romania, 2025) [10.1109/iv64158.2025.11097590].
ViewpointDepth: A New Dataset for Monocular Depth Estimation Under Viewpoint Shifts
Pjetri, Aurel; Bagdanov, Andrew David
2025
Abstract
Monocular depth estimation is a critical task for autonomous driving and many other computer vision applications. While significant progress has been made in this field, the effects of viewpoint shifts on depth estimation models remain largely underexplored. This paper introduces a novel dataset and evaluation methodology to quantify the impact of different camera positions and orientations on monocular depth estimation performance. We propose a ground-truth strategy based on homography estimation and object detection, eliminating the need for expensive LIDAR sensors. We collect a diverse dataset of road scenes from multiple viewpoints and use it to assess the robustness of a modern depth estimation model to geometric shifts. After assessing the validity of our strategy on a public dataset, we provide valuable insights into the limitations of current models and highlight the importance of considering viewpoint variations in real-world applications.
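
The LIDAR-free ground-truth strategy named in the abstract combines homography estimation with object detection. One common way to realize this idea is inverse perspective mapping: estimate a homography from image pixels lying on the road plane to metric ground-plane coordinates, then read off each detected object's distance from the bottom-center of its bounding box, the point most plausibly in contact with the ground. The Python/OpenCV sketch below illustrates that general recipe only; the point correspondences, the example bounding box, and the helper name object_distance are hypothetical placeholders, and the paper's actual pipeline may differ.

import cv2
import numpy as np

# Hypothetical pixel/metric correspondences on the road surface (e.g. from
# lane markings of known geometry). Values are illustrative placeholders.
image_pts = np.array([[500, 700], [780, 700], [610, 500], [670, 500]], dtype=np.float32)
ground_pts = np.array([[-1.8, 5.0], [1.8, 5.0], [-1.8, 30.0], [1.8, 30.0]], dtype=np.float32)

# Homography mapping image points on the road plane to metric ground coordinates.
H, _ = cv2.findHomography(image_pts, ground_pts)

def object_distance(bbox):
    """Metric distance to a detected object, assuming the bottom-center of its
    bounding box (x1, y1, x2, y2 in pixels) rests on the road plane."""
    x1, y1, x2, y2 = bbox
    foot = np.array([[[(x1 + x2) / 2.0, float(y2)]]], dtype=np.float32)
    gx, gy = cv2.perspectiveTransform(foot, H)[0, 0]
    return float(np.hypot(gx, gy))  # Euclidean distance on the ground plane

# A bounding box from any off-the-shelf object detector would be used here.
print(f"Estimated distance: {object_distance((600, 560, 680, 650)):.1f} m")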



Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.