Wiley, Cognitive Science: A Multidisciplinary Journal, 23(1), pp. 53-82
DOI: 10.1207/s15516709cog2301_3
Workshops in Computing, pp. 19-33
DOI: 10.1007/978-1-4471-3579-1_2
This paper shows how a neural network can model the way people who have acquired knowledge of an artificial grammar in one perceptual domain (e.g., sequences of tones differing in pitch) can apply that knowledge to a quite different perceptual domain (e.g., sequences of letters). It is shown that a version of the Simple Recurrent Network (SRN) can transfer its knowledge of artificial grammars across domains without feedback. The model's performance is sensitive to at least some of the same variables that affect subjects' performance: for example, the model is responsive to the grammaticality of test sequences and to their similarity to training sequences, to the cover task used during training, and to whether training is on bigrams or on larger sequences.
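The abstract takes the Simple Recurrent Network (Elman, 1990) as given. As a rough illustration of the architecture only, and not of the authors' actual model, the sketch below implements a minimal Elman-style SRN trained to predict the next symbol in a sequence, the standard training task in artificial-grammar simulations. The alphabet size, hidden-layer size, learning rate, toy sequences, and the grammaticality score are all invented for this example.

import numpy as np

rng = np.random.default_rng(0)

class SRN:
    """Minimal Elman-style simple recurrent network for next-symbol prediction."""

    def __init__(self, n_symbols, n_hidden, lr=0.1):
        self.lr = lr
        self.W_xh = rng.normal(0.0, 0.5, (n_hidden, n_symbols))  # input -> hidden
        self.W_hh = rng.normal(0.0, 0.5, (n_hidden, n_hidden))   # context -> hidden
        self.W_hy = rng.normal(0.0, 0.5, (n_symbols, n_hidden))  # hidden -> output
        self.b_h = np.zeros(n_hidden)
        self.b_y = np.zeros(n_symbols)

    def step(self, x, h_prev):
        # The context layer is a copy of the previous hidden state.
        h = np.tanh(self.W_xh @ x + self.W_hh @ h_prev + self.b_h)
        z = self.W_hy @ h + self.b_y
        p = np.exp(z - z.max())
        p /= p.sum()                    # softmax over possible next symbols
        return h, p

    def train_sequence(self, seq):
        # Online gradient descent with one-step truncated backpropagation.
        eye = np.eye(self.b_y.size)
        h = np.zeros(self.b_h.size)
        loss = 0.0
        for t in range(len(seq) - 1):
            x, target = eye[seq[t]], seq[t + 1]
            h_prev = h
            h, p = self.step(x, h_prev)
            loss -= np.log(p[target] + 1e-12)
            dz = p.copy()
            dz[target] -= 1.0           # gradient of cross-entropy + softmax
            dh = (self.W_hy.T @ dz) * (1.0 - h ** 2)
            self.W_hy -= self.lr * np.outer(dz, h)
            self.b_y -= self.lr * dz
            self.W_xh -= self.lr * np.outer(dh, x)
            self.W_hh -= self.lr * np.outer(dh, h_prev)
            self.b_h -= self.lr * dh
        return loss / (len(seq) - 1)

def score(net, seq):
    # Mean next-symbol probability: higher for strings the grammar allows.
    h, total = np.zeros(net.b_h.size), 0.0
    for t in range(len(seq) - 1):
        h, p = net.step(np.eye(net.b_y.size)[seq[t]], h)
        total += p[seq[t + 1]]
    return total / (len(seq) - 1)

# Toy usage: index-coded symbol strings invented for illustration.
net = SRN(n_symbols=5, n_hidden=12)
training = [[0, 1, 2, 3, 4], [0, 2, 1, 3, 4]]
for _ in range(300):
    for s in training:
        net.train_sequence(s)

print(score(net, [0, 1, 2, 3, 4]))  # trained (grammatical) string scores higher
print(score(net, [4, 3, 2, 1, 0]))  # reversed string should score lower

In the paper's across-domain setting, test sequences are built from a different symbol set than training sequences, so a simulation of transfer would additionally need a mapping between input vocabularies; the sketch above covers only within-domain next-symbol prediction.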