Predicting Visual Fixations.

Keywords: benchmarking; eye movements; fixations; information theory; model comparison; saliency; taxonomy; transfer learning; unifying framework

Journal

Annual Review of Vision Science
ISSN: 2374-4650
Abbreviated title: Annu Rev Vis Sci
Country: United States
NLM ID: 101660822

Publication information

Publication date: 2023-09-15
History:
medline: 2023-09-18
pubmed: 2023-07-08
entrez: 2023-07-07
Status: ppublish

Abstract

As we navigate and behave in the world, we are constantly deciding, a few times per second, where to look next. The outcomes of these decisions in response to visual input are comparatively easy to measure as trajectories of eye movements, offering insight into many unconscious and conscious visual and cognitive processes. In this article, we review recent advances in predicting where we look. We focus on evaluating and comparing models: How can we consistently measure how well models predict eye movements, and how can we judge the contribution of different mechanisms? Probabilistic models facilitate a unified approach to fixation prediction that allows us to use information gain explained to compare different models across different settings, such as static and video saliency, as well as scanpath prediction. We review how the large variety of saliency maps and scanpath models can be translated into this unifying framework, how much different factors contribute, and how we can select the most informative examples for model comparison. We conclude that the universal scale of information gain offers a powerful tool for the inspection of candidate mechanisms and experimental design that helps us understand the continual decision-making process that determines where we look.
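The information-gain metric the abstract refers to can be illustrated with a small sketch (hypothetical toy data, not the authors' code): for a probabilistic fixation model, information gain is the average log-likelihood advantage over a baseline density, in bits per fixation.

```python
import numpy as np

def information_gain(model_density, baseline_density, fixations):
    """Average log2-likelihood advantage of a probabilistic fixation model
    over a baseline, in bits per fixation.

    model_density, baseline_density: 2D arrays that each sum to 1.
    fixations: list of (row, col) pixel coordinates of observed fixations.
    """
    rows, cols = zip(*fixations)
    ll_model = np.log2(model_density[rows, cols])
    ll_baseline = np.log2(baseline_density[rows, cols])
    return float(np.mean(ll_model - ll_baseline))

# Toy example: a center-weighted model versus a uniform baseline.
h, w = 8, 8
uniform = np.full((h, w), 1.0 / (h * w))
model = np.ones((h, w))
model[4, 4] = 100.0   # concentrate probability mass near the center
model /= model.sum()  # renormalize to a probability density

# A central fixation is predicted better than chance, so the gain is positive.
print(information_gain(model, uniform, [(4, 4)]))
```

Because the metric is a difference of log-likelihoods on a common scale (bits per fixation), it allows the comparison of very different model classes, which is the unifying role the abstract describes.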

Identifiers

pubmed: 37419107
doi: 10.1146/annurev-vision-120822-072528

Publication types

Journal Article; Review; Research Support, Non-U.S. Gov't

Languages

eng

Citation subset

IM

Pagination

269-291

Authors

Matthias Kümmerer (M)

Tübingen AI Center, University of Tübingen, Tübingen, Germany; email: matthias.kuemmerer@bethgelab.org, matthias@bethgelab.org.

Matthias Bethge (M)

Tübingen AI Center, University of Tübingen, Tübingen, Germany; email: matthias.kuemmerer@bethgelab.org, matthias@bethgelab.org.

Similar articles

Sub-cone visual resolution by active, adaptive sampling in the human foveola.

Jenny L Witten, Veronika Lukyanova, Wolf M Harmening
MeSH: Humans; Fovea Centralis; Retinal Cone Photoreceptor Cells; Visual Acuity; Adult

Multi-stage gaze-controlled virtual keyboard using eye tracking.

Verdzekov Emile Tatinyuy, Auguste Vigny Noumsi Woguia, Joseph Mvogo Ngono et al.
MeSH: Humans; Eye-Tracking Technology; Fixation, Ocular; Female; Male

MeSH classifications