Published in

Brill Academic Publishers, Timing and Time Perception, 1(9), pp. 1-38, 2020

DOI: 10.1163/22134468-bja10013

Auditory and Visual Durations Load a Unitary Working-Memory Resource

Journal article published in 2020 by Kielan Yarrow, Carine Samba, Carmen Kohl, and Derek H. Arnold
This paper was not found in any repository, but could be made available legally by the author.

Full text: Unavailable

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

Items in working memory are typically defined by various attributes, such as colour (for visual objects) and pitch (for auditory objects). The attribute of duration can be signalled by multiple modalities, but has received relatively little attention from a working-memory perspective. While the existence of specialist stores (e.g., the phonological loop and visuospatial sketchpad) is often asserted in the wider working-memory literature, the interval-timing literature has more often implied a unitary (amodal) store. Here we combine two modelling frameworks to probe the basis of working memory for duration: a Bayesian-observer framework, previously used to explain behaviour in duration-reproduction tasks, and mixture models, which describe distributions of continuous reports about items in working memory. We modelled different storage mechanisms, such as a limited number of fixed-resolution slots or a resource spread between items at a cost to resolution, in order to ask whether items from different sensory modalities are maintained in separate, independent stores. We initially analysed data from 32 participants, who memorised between one and eight items before reproducing the duration of a randomly selected target. In separate blocks, items could be all visual, all auditory, or an alternating mixture of both. A small control experiment included a further condition with precuing of the target modality. Certain kinds of slot models, resource models, and combination models incorporating both mechanisms could account for the data. However, looking across all plausible models, the decline in performance with increasing memory load was most consistent with a single store for event durations, regardless of stimulus modality.
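
For readers unfamiliar with the modelling approach the abstract refers to, the sketch below illustrates, without reproducing the authors' actual model, how a Bayesian observer combined with a shared (resource-style) memory store would shape duration reproductions. All parameter values, the Gaussian prior, and the square-root-of-load noise scaling are assumptions made purely for illustration.

```python
# Illustrative sketch (not the authors' code): a Bayesian observer reproducing
# a memorised duration, where memory precision falls as more items share a
# single working-memory resource. Parameter values and the sqrt(load) noise
# scaling are assumptions chosen only for demonstration.

import numpy as np

def reproduce_duration(true_duration, load, rng,
                       prior_mean=0.8, prior_sd=0.3,
                       base_noise_sd=0.1):
    """Simulate one reproduction of a memorised duration (seconds)."""
    # Resource account: noise SD grows with the number of stored items.
    memory_sd = base_noise_sd * np.sqrt(load)
    memory_sample = rng.normal(true_duration, memory_sd)

    # Bayesian-observer step: precision-weighted fusion of the noisy memory
    # sample with a Gaussian prior over plausible durations.
    w_memory = prior_sd**2 / (prior_sd**2 + memory_sd**2)
    return w_memory * memory_sample + (1 - w_memory) * prior_mean

rng = np.random.default_rng(0)
for load in (1, 2, 4, 8):
    reps = [reproduce_duration(0.5, load, rng) for _ in range(10_000)]
    print(f"load {load}: mean = {np.mean(reps):.3f} s, sd = {np.std(reps):.3f} s")
```

Under these illustrative assumptions, simulated reproductions become both more variable and more strongly biased towards the prior mean as memory load increases, which is the qualitative load-dependent decline in performance that the abstract describes for a store shared across items.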