Published in

Association for Research in Vision and Ophthalmology, Journal of Vision, 15(4), p. 13

DOI: 10.1167/15.4.13

Obligatory and adaptive averaging in visual short term memory

Journal article published in 2015 by Chad Dubé, Robert Sekuler
This paper is made freely available by the publisher.


Abstract

Visual memory can draw upon averaged perceptual representations, a dependence that could be both adaptive and obligatory. In support of this idea, we review a wide range of evidence, including findings from our own lab. This evidence shows that time- and space-averaged memory representations influence detection and recognition responses, and do so without instruction to compute or report an average. Some of the work reviewed exploits fine-grained measures of retrieval from visual short-term memory (VSTM) to closely track the influence of stored averages on recall and recognition of briefly presented visual textures. Results show that reliance on perceptual averages is greatest when memory resources are taxed, or when subjects are uncertain about the fidelity of their memory representation. We relate these findings to models of how summary statistics impact VSTM, and discuss a neural signature for contexts in which perceptual averaging exerts maximal influence.

In its broadest sense, a representation may be defined as anything that stands for something other than itself (Frisby & Stone, 2010). For example, the word "quinoa" is a representation of the grain quinoa, and if you are staring intently at some quinoa, the activations of neurons in cortical Area V1 would be yet another representation of quinoa. In fact, the visual system exploits multiple representations, which vary in the fidelity with which each captures the details of the stimulus being represented. At one extreme, such representations may be detailed and precise, faithfully capturing a great many of a stimulus's features; at the other extreme, they may be likened to a broad-brush, quick sketch of the stimulus, which omits most details. It is easy to imagine the value of discarding some sensory information in favor of a more compact, less detailed representation.
For example, the spatial or temporal properties of some stimulus could limit the information that can be encoded, thereby forcing the system to fall back on a space- or time-averaged summary of the incoming stimulus. McDermott, Schemitsch, and Simoncelli (2013) made this point with particular clarity in a study of au