Published in

American Physical Society, Physical Review E: Statistical, Nonlinear, and Soft Matter Physics, 57(2), pp. 2170-2176

DOI: 10.1103/physreve.57.2170

Learning with regularizers in multilayer neural networks

Journal article published in 1998 by David Saad and Magnus Rattray
This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving allowed
Data provided by SHERPA/RoMEO

Abstract

We study the effect of regularization in an on-line gradient-descent learning scenario for a general two-layer student network with an arbitrary number of hidden units. Training examples are randomly drawn input vectors labelled by a two-layer teacher network, also with an arbitrary number of hidden units, whose output may be corrupted by Gaussian noise. We examine the effect of weight-decay regularization on the dynamical evolution of the order parameters and the generalization error in the various phases of the learning process, in both noiseless and noisy scenarios.
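The setting described in the abstract can be sketched in code. The snippet below is an illustrative toy version, not the authors' analysis: a two-layer "student" network is trained by on-line gradient descent with a weight-decay term on examples labelled by a fixed two-layer "teacher" whose output is corrupted by Gaussian noise. The network sizes, tanh activation, learning rate, and decay strength are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes and hyperparameters (for illustration only)
N, K, M = 100, 3, 3          # input dim, student / teacher hidden units
eta, gamma = 0.1, 1e-3       # learning rate, weight-decay strength
sigma = 0.1                  # std of Gaussian noise on the teacher's output

g = np.tanh                                    # hidden-unit activation (assumed)
dg = lambda x: 1.0 - np.tanh(x) ** 2           # its derivative

B = rng.standard_normal((M, N)) / np.sqrt(N)   # fixed teacher weights
J = rng.standard_normal((K, N)) / np.sqrt(N)   # student weights (trained)

def teacher(x):
    return g(B @ x).sum()

def student(x, J):
    return g(J @ x).sum()

def gen_error(J, n_test=200):
    """Mean squared deviation from the (noiseless) teacher on fresh inputs."""
    xs = rng.standard_normal((n_test, N))
    return np.mean([(student(x, J) - teacher(x)) ** 2 / 2 for x in xs])

e_start = gen_error(J)

for step in range(5000):
    x = rng.standard_normal(N)                      # fresh random input
    y = teacher(x) + sigma * rng.standard_normal()  # noisy label
    h = J @ x                                       # student pre-activations
    delta = student(x, J) - y                       # output error
    # Gradient of the squared error plus the weight-decay term gamma * J
    grad = np.outer(delta * dg(h), x) + gamma * J
    J -= (eta / N) * grad                           # on-line update

e_end = gen_error(J)
```

Each example is used once and discarded, so the update rule defines a stochastic dynamics for the student weights; the weight-decay term `gamma * J` shrinks the weights at every step, which is the regularizer whose effect on the learning phases the paper analyses.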