
Published in

Annual Reviews, Annual Review of Neuroscience, 41, pp. 233-253, 2018

DOI: 10.1146/annurev-neuro-080317-061948


Computational Principles of Supervised Learning in the Cerebellum

Journal article published in 2018 by Jennifer L. Raymond and Javier F. Medina
This paper was not found in any repository, but could be made available legally by the author.

Full text: Unavailable

Preprint: archiving allowed
Postprint: archiving forbidden
Published version: archiving forbidden

Data provided by SHERPA/RoMEO

Abstract

Supervised learning plays a key role in the operation of many biological and artificial neural networks. Analysis of the computations underlying supervised learning is facilitated by the relatively simple and uniform architecture of the cerebellum, a brain area that supports numerous motor, sensory, and cognitive functions. We highlight recent discoveries indicating that the cerebellum implements supervised learning using the following organizational principles: (a) extensive preprocessing of input representations (i.e., feature engineering), (b) massively recurrent circuit architecture, (c) linear input–output computations, (d) sophisticated instructive signals that can be regulated and are predictive, (e) adaptive mechanisms of plasticity with multiple timescales, and (f) task-specific hardware specializations. The principles emerging from studies of the cerebellum have striking parallels with those in other brain areas and in artificial neural networks, as well as some notable differences, which can inform future research on supervised learning and inspire next-generation machine-based algorithms.
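
Several of the abstract's principles map onto a classic Marr-Albus-style toy model: a random expansion of the input (principle a), a linear readout (principle c), and an error signal driving a delta-rule weight update (principles d and e). The Python sketch below is a hypothetical illustration of that combination, not the authors' model; the layer sizes, the random projection, the teacher mapping, and the learning rate are all illustrative assumptions.

# Minimal sketch of a Marr-Albus-style supervised learning loop, assembled
# from principles (a), (c), (d), and (e) named in the abstract. All sizes
# and parameters are illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_mossy, n_granule = 10, 1000

# (a) Feature engineering: mossy-fiber input is expanded into a large,
# sparse, granule-cell-like code via a fixed random projection.
W_expand = rng.normal(size=(n_granule, n_mossy))

def granule_layer(x):
    # Threshold-linear expansion -> sparse, high-dimensional representation.
    return np.maximum(W_expand @ x - 1.0, 0.0)

# A hypothetical "teacher" defines the target input-output mapping.
w_teacher = rng.normal(size=n_granule) / np.sqrt(n_granule)

# (c) Linear input-output computation: a Purkinje-cell-like linear readout.
w = np.zeros(n_granule)

# (d, e) Instructive signal and plasticity: a climbing-fiber-like error
# signal drives a delta-rule weight update (an LTD/LTP analogue).
lr = 1e-4
for _ in range(5000):
    x = rng.normal(size=n_mossy)          # mossy-fiber input
    g = granule_layer(x)                  # expanded representation
    error = w_teacher @ g - w @ g         # instructive (error) signal
    w += lr * error * g                   # supervised weight update

x_test = rng.normal(size=n_mossy)
g_test = granule_layer(x_test)
print(f"target: {w_teacher @ g_test:+.3f}   learned: {w @ g_test:+.3f}")

After training, the learned readout roughly matches the teacher on new inputs. The sketch deliberately omits the recurrent architecture (principle b) and multi-timescale plasticity (part of e); its point is only that a fixed nonlinear expansion followed by an error-corrected linear readout is sufficient for this style of supervised learning.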