Computational models of visual attention lie at the crossroads of disciplines like cognitive science, computational neuroscience, and computer vision. When eye-tracking devices are not a viable option, models of human attention can be used to predict fixations. Not only are humans correlated in the locations they fixate, they also agree to some extent on the order of their fixations. In some applications (e.g. advertising) it is desirable to predict gaze shifts multiple steps ahead. In this project we propose models of visual attention scanpaths based on the principle that foundational laws drive the emergence of visual attention. The models are evaluated on saliency prediction and scanpath prediction tasks.
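To make the two evaluation tasks concrete, here is a minimal sketch (not the method from the papers below) of how fixations are typically handled: a dense saliency map is built by placing a Gaussian at each fixation, and a predicted scanpath is scored against a human one with a simple order-aware distance. All function names and the `sigma` parameter are illustrative assumptions.

```python
import numpy as np

def saliency_map(fixations, shape, sigma=20.0):
    """Turn discrete fixations into a dense saliency map by
    accumulating an isotropic Gaussian at each fixation point
    (a common convention in saliency evaluation; sigma is an
    assumed smoothing width in pixels)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    smap = np.zeros(shape, dtype=float)
    for (x, y) in fixations:
        smap += np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return smap / smap.max()  # normalize to [0, 1]

def scanpath_distance(pred, human):
    """Mean Euclidean distance between two equal-length scanpaths:
    a simple order-aware score (lower = more similar). Real
    benchmarks use richer metrics, e.g. string-edit distance."""
    pred, human = np.asarray(pred, float), np.asarray(human, float)
    return float(np.mean(np.linalg.norm(pred - human, axis=1)))

# Toy example: three fixations on a 128x160 image.
human = [(40, 30), (80, 60), (120, 90)]
model = [(42, 28), (85, 55), (118, 95)]
smap = saliency_map(human, shape=(128, 160))
print(scanpath_distance(model, human))
```

The FixaTons paper listed below collects several such scanpath-similarity metrics; the sketch above only illustrates the general shape of the evaluation.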
*Demos: predicted scanpaths on static scenes and on dynamic scenes.*
- NIPS 2017 | Variational Laws of Visual Attention for Dynamic Scenes
- PBR 2018 | A unified computational framework for visual attention dynamics
- ArXiv 2018 | FixaTons: A Collection of Human Fixations Datasets and Metrics for Scanpath Similarity (MIT saliency benchmarks)
- ArXiv 2018 | Visual Attention Driven by Convolutional Features
- TPAMI 2019 | Gravitational Laws of Focus of Attention