Predicting human visuomotor behaviour in a driving task

The sequential deployment of gaze to regions of interest is an integral part of human visual function. Owing to its central importance, decades of research have focused on predicting gaze locations, but there have been few formal attempts to predict the temporal aspects of gaze deployment in natural multi-tasking situations. We approach this problem by decomposing complex visual behaviour into individual task modules, each requiring an independent source of visual information for control, and use these modules to model human gaze deployment on different task-relevant objects. We introduce a softmax barrier model for gaze selection that uses two key elements: a priority parameter representing the importance of each task module, and noise estimates that allow modules to represent uncertainty about the state of task-relevant visual information. Comparisons with human gaze data gathered in a virtual driving environment show that the model closely approximates human performance.
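As a minimal illustration of the kind of mechanism the abstract describes, the Python sketch below samples gaze targets from a softmax over priority-weighted uncertainty: each module's uncertainty grows while gaze is elsewhere, and a fixation resets it. All names and values (`TaskModule`, `select_gaze`, the module list, the priority and noise parameters) are hypothetical, chosen for illustration; they are not the paper's notation or implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, temperature=1.0):
    """Numerically stable softmax."""
    z = (x - np.max(x)) / temperature
    e = np.exp(z)
    return e / e.sum()

class TaskModule:
    """One task module (e.g. lane keeping, speed control, car following).

    `priority` weights task importance; `noise` is the rate at which
    uncertainty about the module's state grows while gaze is elsewhere.
    Both are assumed parameters, not values from the paper.
    """
    def __init__(self, name, priority, noise):
        self.name = name
        self.priority = priority
        self.noise = noise
        self.uncertainty = 0.0

    def propagate(self, dt):
        # Uncertainty about task-relevant state grows while unattended.
        self.uncertainty += self.noise * dt

    def observe(self):
        # A fixation on the task-relevant object resets uncertainty
        # (a simplification; a Kalman-style partial update is equally plausible).
        self.uncertainty = 0.0

def select_gaze(modules, temperature=1.0):
    """Sample the next gaze target from a softmax over priority * uncertainty."""
    scores = np.array([m.priority * m.uncertainty for m in modules])
    p = softmax(scores, temperature)
    return rng.choice(len(modules), p=p)

# Simulate a short sequence of gaze deployments at 250 ms intervals.
modules = [TaskModule("lane", 1.0, 0.5),
           TaskModule("speed", 0.6, 0.3),
           TaskModule("lead_car", 0.8, 0.4)]

for step in range(10):
    for m in modules:
        m.propagate(dt=0.25)
    i = select_gaze(modules)
    modules[i].observe()
    print(f"t={step * 0.25:4.2f}s  gaze -> {modules[i].name}")
```

The multiplicative combination of priority and uncertainty, the zero-reset on fixation, and the fixed decision interval are all simplifying assumptions for this sketch; the model reported in the paper fits its priority and noise parameters to human gaze data.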
