Published in

Wiley, Cognitive Science: A Multidisciplinary Journal, 16(1), pp. 41-79, 1992

DOI: 10.1207/s15516709cog1601_2

DOI: 10.1016/0364-0213(92)90017-o


Connectionist and Memory-Array Models of Artificial Grammar Learning

Journal article published in 1992 by Zoltan Dienes.
This paper is made freely available by the publisher.


Abstract

Subjects exposed to strings of letters generated by a finite state grammar can later classify grammatical and nongrammatical test strings, even though they cannot adequately say what the rules of the grammar are (e.g., Reber, 1989). The MINERVA 2 (Hintzman, 1986) and Medin and Schaffer (1978) memory-array models and a number of connectionist autoassociator models are tested against experimental data by deriving largely parameter-free predictions from the models of the rank order of classification difficulty of test strings. The importance of different assumptions regarding the coding of features (How should the absence of a feature be coded? Should single letters or digrams be coded?), the learning rule used (Hebb rule vs. delta rule), and the connectivity (Should features be predicted only by previous features in the string, or by all features simultaneously?) is investigated by determining the performance of the models with and without each assumption. Only one class of connectionist model (the simultaneous delta rule) passes all the tests. It is shown that this class of model can be regarded as abstracting a set of representative but incomplete rules of the grammar.
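To make the "simultaneous delta rule" idea concrete, the following is a minimal sketch (not the paper's exact coding scheme or parameters) of an autoassociator in which every feature unit predicts every feature unit, trained with the delta rule. The alphabet, string length, and one-hot position coding are illustrative assumptions; the absence of a feature is coded as 0 here, one of the coding choices the paper examines. After training on grammatical strings, reconstruction error serves as a classification score: lower error suggests a grammatical string.

```python
import numpy as np

ALPHABET = "MTVRX"   # illustrative letter set, not the grammar used in the paper
MAX_LEN = 6          # illustrative maximum string length

def encode(s):
    # One feature unit per (position, letter) pair; absent features coded as 0.
    v = np.zeros(MAX_LEN * len(ALPHABET))
    for i, ch in enumerate(s):
        v[i * len(ALPHABET) + ALPHABET.index(ch)] = 1.0
    return v

def train(strings, lr=0.05, epochs=50):
    # Simultaneous connectivity: the weight matrix lets all features
    # jointly predict each feature, rather than only preceding ones.
    n = MAX_LEN * len(ALPHABET)
    W = np.zeros((n, n))
    for _ in range(epochs):
        for s in strings:
            x = encode(s)
            err = x - W @ x              # delta rule: target is the input itself
            W += lr * np.outer(err, x)   # error-driven weight update
    return W

def reconstruction_error(W, s):
    # Classification score: squared error between a string and its reconstruction.
    x = encode(s)
    return float(np.sum((x - W @ x) ** 2))
```

With a Hebb rule the update would instead be `W += lr * np.outer(x, x)`, accumulating co-occurrence counts without error correction; the delta rule's error term is what lets the network converge toward reproducing the training set exactly, which is the property distinguishing the model class that passed all the tests.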