Computational models of visual attention lie at the crossroads of disciplines such as cognitive science, computational neuroscience, and computer vision. This paper proposes an approach based on the principle that foundational laws drive the emergence of visual attention. We devise variational laws of eye movement that rely on a generalized view of the Least Action Principle in physics. The potential energy captures detailed as well as peripheral visual features, while the kinetic energy corresponds to its classic interpretation in analytic mechanics. In addition, the Lagrangian contains a brightness invariance term, which significantly characterizes the scanpath trajectories. We obtain differential equations of visual attention as the stationary point of the generalized action, and propose an algorithm to estimate the model parameters. Finally, we report experimental results that validate the model on saliency detection tasks.
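The variational structure described in the abstract can be sketched as follows. This is an illustrative outline only, under assumed notation (a gaze trajectory \(x(t)\), kinetic term \(K\), potential term \(U\), brightness-invariance term \(B\), and weight \(\lambda\)); the paper's own symbols and exact functional forms may differ.

```latex
% Generalized action over a gaze trajectory x(t), t in [0, T]:
%   K  -- kinetic energy (classic analytic-mechanics term),
%   U  -- potential energy (detail and peripheral visual features),
%   B  -- brightness invariance term, weighted by lambda.
S[x] = \int_{0}^{T} \mathcal{L}\bigl(x(t), \dot{x}(t), t\bigr)\, dt,
\qquad
\mathcal{L} = K(\dot{x}) - U(x, t) + \lambda\, B(x, \dot{x}, t).

% Imposing stationarity of the action, \delta S = 0, yields the
% Euler--Lagrange equations, i.e. the differential equations of
% visual attention mentioned in the abstract:
\frac{d}{dt}\,\frac{\partial \mathcal{L}}{\partial \dot{x}}
- \frac{\partial \mathcal{L}}{\partial x} = 0.
```

The stationary point of this generalized action is what the abstract refers to as the differential equations governing the scanpath.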
Variational Laws of Visual Attention for Dynamic Scenes / Zanca, Dario; Gori, Marco. - Print. - (2017), pp. 3824-3833. (Paper presented at the NIPS 2017 conference).