Association for Research in Vision and Ophthalmology, Journal of Vision, 15(7), 2
DOI: 10.1167/15.7.2
This is the author-accepted manuscript. The final version is available from the Association for Research in Vision and Ophthalmology at http://jov.arvojournals.org/Article.aspx?articleid=2296935.

In its search for neural codes, the field of visual neuroscience has uncovered neural representations that reflect the structure of stimuli of variable complexity, from simple features to object categories. However, accumulating evidence suggests an adaptive neural code that is dynamically shaped by experience to support flexible and efficient perceptual decisions. Here, we review work showing that experience plays a critical role in molding midlevel visual representations for perceptual decisions. Combining behavioral and brain-imaging measurements, we demonstrate that learning optimizes feature binding for object recognition in cluttered scenes and tunes the neural representations of informative image parts to support efficient categorical judgements. Our findings indicate that similar learning mechanisms may mediate long-term optimization through development, tune the visual system to fundamental principles of feature binding, and optimize feature templates for perceptual decisions.

This work was supported by a Wellcome Trust Senior Research Fellowship to AEW (095183/Z/10/Z) and by grants to ZK from the Biotechnology and Biological Sciences Research Council (H012508), a Leverhulme Trust Research Fellowship (RF-2011-378), and the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA grant agreement no. PITN-GA-2011-290011.