Project A3. Learning and adaptation of spatial attention templates in visual search

Applicant Professor Dr. Thomas Geyer
Subject Area General, Cognitive and Mathematical Psychology
Term from 2015 to 2025
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 263727500
 

Project Description

In everyday scenes, searched-for targets do not appear in isolation but are embedded within configurations of non-target, or distractor, items. If the position of the target relative to the distractors is invariant, such spatial contingencies are implicitly learned and come to guide visual scanning (contextual-cueing effect; Chun & Jiang, 1998; Geyer et al., 2010b). In terms of predictive-coding models, implicit configural learning can be conceptualized as the gradual acquisition of associations between sensory input and its cause (Friston, 2010; Conci et al., 2012). That is, search-guiding predictions about the target's location are propagated top-down from contextual memory to influence bottom-up search processes. However, the effectiveness of contextual predictions depends heavily on the consistency between bottom-up perceptual input and context memory: following configural learning, relocating the target to an unexpected position within an otherwise unchanged distractor context completely abolishes contextual cueing, and the gains deriving from the invariant context recover only very slowly with increasing exposure to the changed displays (Zellin et al., 2014). The present proposal will employ behavioral and neuroscience methods to investigate the balance between top-down and bottom-up contributions to contextual learning in visual search.
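To make the qualitative pattern described above concrete, the following minimal Python sketch simulates contextual cueing with a single associative weight linking a repeated distractor context to a target location. It is an illustration only, not the project's model: the function name, learning rates, relocation epoch, and the mapping from association strength to search time are hypothetical assumptions chosen solely to reproduce the verbal description (fast acquisition of a cueing benefit, abolition after target relocation, slow recovery).

    # Illustrative toy simulation (not the project's actual model): one
    # associative weight links a repeated context to a target location.
    # All parameters below are hypothetical assumptions for illustration.

    def simulate_contextual_cueing(epochs=40, relocate_at=20,
                                   alpha_initial=0.25, alpha_after=0.03,
                                   baseline_rt=1000.0, max_gain=200.0):
        """Return per-epoch simulated search times (ms) for repeated displays."""
        weight = 0.0          # strength of the context -> target-location association
        rts = []
        for epoch in range(epochs):
            if epoch == relocate_at:
                # Target relocated within an unchanged context: the top-down
                # prediction no longer matches the input, so the cueing
                # benefit is abolished and learning must start over.
                weight = 0.0
            # Before relocation the association grows quickly; afterwards the
            # new target location is acquired with a much smaller rate,
            # mimicking the slow recovery reported by Zellin et al. (2014).
            alpha = alpha_initial if epoch < relocate_at else alpha_after
            weight += alpha * (1.0 - weight)
            # Stronger context memory yields faster (shorter) search times.
            rts.append(baseline_rt - max_gain * weight)
        return rts

    if __name__ == "__main__":
        for epoch, rt in enumerate(simulate_contextual_cueing(), start=1):
            print(f"epoch {epoch:2d}: simulated search time {rt:6.1f} ms")

Running the sketch prints search times that drop quickly over the first epochs, jump back to baseline at the relocation epoch, and then decrease only gradually, mirroring the behavioral pattern the project sets out to explain.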
DFG Programme Research Units
Subproject of FOR 2293: Active Perception
Co-Investigator Privatdozent Dr. Markus Conci