
Self-supervised Road Accident Anticipation with Non-decreasing Danger / Pjetri, Aurel; Abbondandolo, Davide; de Andrade, Douglas Coimbra; Caprasecca, Stefano; Sambo, Francesco; Bagdanov, Andrew David. - (2025), pp. 65-79. (European Conference on Computer Vision Workshops) [DOI: 10.1007/978-3-031-91767-7_5].

Self-supervised Road Accident Anticipation with Non-decreasing Danger

Pjetri, Aurel;Bagdanov, Andrew David
2025

Abstract

Road scene analysis methods usually classify driving events after they happen, providing valuable video evidence and coaching opportunities but not the in-cabin alerts that could help prevent accidents. On-device models, on the other hand, try to identify dangerous events such as frontal collision or lane departure beforehand and warn the driver before an accident happens. However, such systems typically employ object detectors plus hand-engineered rules, which fail to cover all possible corner cases. We propose a novel self-supervised end-to-end training approach. Our solution does not require object detection and is able to predict common dangerous situations such as imminent forward collision or short following distance, as well as corner cases such as unusual driving behavior or potential collision with animals. By assuming a non-decreasing danger level in the instants just before a crash, we design a novel loss function that trains our model without the need for hand-engineered rules. In fact, the proposed loss function is simple and generic enough to be applied to any task with known start and end states. Our approach enables the use of larger datasets for training by drastically reducing labeling effort, while still maintaining performance competitive with the literature on the Car Crash Dataset (CCD) and the Dashcam Accident Dataset (DAD). Finally, we show how our method generalizes to a variety of crash dynamics from the Dataset of Traffic Anomaly (DoTA), while maintaining a contained inference time thanks to its streamlined approach.
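The abstract does not state the loss function itself, but one common way to encode a "non-decreasing danger" assumption is a hinge penalty on any drop of the predicted danger score between consecutive pre-crash frames. The sketch below illustrates that idea only; the function name and exact formulation are hypothetical, not the paper's definition:

```python
import numpy as np

def monotonic_danger_loss(scores):
    """Hinge penalty on decreases of a per-frame danger score.

    scores: 1-D array of predicted danger values, ordered in time,
    covering the window leading up to the crash. Any step where the
    score decreases contributes its drop to the loss; a perfectly
    non-decreasing sequence incurs zero loss.
    """
    diffs = scores[:-1] - scores[1:]        # positive entries = drops in danger
    return np.clip(diffs, 0.0, None).mean()
```

With scores that only rise toward the crash (e.g. `[0.1, 0.2, 0.9]`) the penalty is zero, while a sequence that relaxes after a spike (e.g. `[0.9, 0.1, 0.1]`) is penalized, which matches the stated assumption of danger not decreasing just before impact.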
2025
Proceedings of the 2024 European Conference on Computer Vision Workshops
European Conference on Computer Vision Workshops
Pjetri, Aurel; Abbondandolo, Davide; de Andrade, Douglas Coimbra; Caprasecca, Stefano; Sambo, Francesco; Bagdanov, Andrew David
Files in this item:
There are no files associated with this item.

Documents in FLORE are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/2158/1425482
Citations
  • Scopus 0