Published in

Wiley, Cognitive Science: A Multidisciplinary Journal, 23(1), p. 53-82

DOI: 10.1016/s0364-0213(99)80052-4

Wiley, Cognitive Science: A Multidisciplinary Journal, 23(1), p. 53-82

DOI: 10.1207/s15516709cog2301_3

Workshops in Computing, p. 19-33

DOI: 10.1007/978-1-4471-3579-1_2

Mapping across Domains Without Feedback: A Neural Network Model of Transfer of Implicit Knowledge

Journal article published in 1995 by Gerry T. M. Altmann, Zoltán Dienes, and Shi-Ji Gao
This paper is made freely available by the publisher.

Self-archiving policy (data provided by SHERPA/RoMEO):

Preprint: archiving forbidden
Postprint: archiving forbidden
Published version: archiving forbidden

Abstract

This paper shows how a neural network can model the way people who have acquired knowledge of an artificial grammar in one perceptual domain (e.g., sequences of tones differing in pitch) can apply that knowledge to a quite different perceptual domain (e.g., sequences of letters). It is shown that a version of the Simple Recurrent Network (SRN) can transfer its knowledge of artificial grammars across domains without feedback. The performance of the model is sensitive to at least some of the same variables that affect subjects' performance: the grammaticality of test sequences, their similarity to training sequences, the cover task used during training, and whether training is on bigrams or larger sequences.
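To make the architecture behind the abstract concrete, here is a minimal sketch of a Simple Recurrent Network (Elman-style) of the kind the paper builds on: the previous hidden state is copied back as context, and the network outputs a distribution over the next symbol in a sequence. All sizes, weight initializations, and the toy sequence are illustrative assumptions, not the authors' actual setup or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: a 4-symbol vocabulary and 8 hidden units.
n_sym, n_hidden = 4, 8
W_in  = rng.normal(0, 0.5, (n_hidden, n_sym))     # input -> hidden
W_ctx = rng.normal(0, 0.5, (n_hidden, n_hidden))  # context (previous hidden) -> hidden
W_out = rng.normal(0, 0.5, (n_sym, n_hidden))     # hidden -> output

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def srn_step(x_onehot, h_prev):
    """One SRN time step: combine the current input with the copied-back
    context layer, then predict a distribution over the next symbol."""
    h = sigmoid(W_in @ x_onehot + W_ctx @ h_prev)
    e = np.exp(W_out @ h)
    return h, e / e.sum()  # new hidden state, softmax over next symbols

def predict_sequence(symbols):
    """Run a symbol sequence through the SRN, collecting the
    next-symbol distribution produced at each step."""
    h = np.zeros(n_hidden)
    dists = []
    for s in symbols:
        x = np.zeros(n_sym)
        x[s] = 1.0
        h, p = srn_step(x, h)
        dists.append(p)
    return dists

# Toy sequence: one pass over four distinct symbols.
dists = predict_sequence([0, 1, 2, 3])
```

In the paper's setting, a network like this is trained to predict successors in grammatical sequences; transfer across domains then amounts to mapping a new symbol set onto the learned sequential structure. This sketch omits training entirely and shows only the forward pass.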