Published in

MIT Press, Neural Computation, 9(2), pp. 441-460, 1997

DOI: 10.1162/neco.1997.9.2.441


Average-Case Learning Curves for Radial Basis Function Networks

Journal article published in 1997 by Sean B. Holden and Mahesan Niranjan
This paper is available in a repository.


Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving restricted
(Archiving policy data provided by SHERPA/RoMEO.)

Abstract

The application of statistical physics to the study of the learning curves of feedforward connectionist networks has to date been concerned mostly with perceptron-like networks. Recent work has extended the theory to networks such as committee machines and parity machines, and an important direction for current and future research is the extension of this body of theory to other connectionist networks. In this article, we use this formalism to investigate the learning curves of gaussian radial basis function networks (RBFNs) having fixed basis functions. (These networks have also been called generalized linear regression models.) We address the problem of learning linear and nonlinear, realizable and unrealizable, target rules from noise-free training examples using a stochastic training algorithm. Expressions for the generalization error, defined as the expected error for a network with a given set of parameters, are derived for general gaussian RBFNs, in which all parameters, including centers and spread parameters, are adaptable. Specializing to the case of RBFNs with fixed basis functions (basis functions whose parameters are chosen without reference to the training examples), we then study the learning curves for these networks in the limit of high temperature.
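
As a rough, purely illustrative Python sketch of the fixed-basis-function setting the abstract describes (not the authors' method: the paper derives analytic learning curves via statistical physics with a stochastic training algorithm in the high-temperature limit, whereas this sketch simply averages least-squares fits over random training sets; the sinusoidal target, the uniform input distribution, and all names here are assumptions made for illustration), one can estimate an empirical learning curve for a gaussian RBFN whose centers and spreads are fixed in advance, so that training reduces to generalized linear regression:

    import numpy as np

    rng = np.random.default_rng(0)

    # Fixed basis functions: centers and spread chosen without
    # reference to the training examples (a uniform grid and a
    # constant width, both assumptions for this sketch).
    centers = np.linspace(-1.0, 1.0, 10)
    sigma = 0.3

    def design_matrix(x):
        """Gaussian activations phi_j(x) = exp(-(x - c_j)^2 / (2 sigma^2))."""
        return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2.0 * sigma**2))

    def target(x):
        """A nonlinear target rule; it need not be realizable by the fixed basis."""
        return np.sin(3.0 * x)

    # Large held-out sample for a Monte Carlo estimate of the
    # generalization error (expected squared error over the inputs).
    x_test = rng.uniform(-1.0, 1.0, size=20_000)

    def generalization_error(n_examples, n_trials=50):
        """Average squared test error over random noise-free training sets."""
        errors = []
        for _ in range(n_trials):
            x_train = rng.uniform(-1.0, 1.0, size=n_examples)
            Phi = design_matrix(x_train)
            # With fixed basis functions the network is linear in its
            # weights, so training is ordinary least squares.
            w, *_ = np.linalg.lstsq(Phi, target(x_train), rcond=None)
            errors.append(np.mean((design_matrix(x_test) @ w - target(x_test)) ** 2))
        return np.mean(errors)

    # Empirical learning curve: average-case error vs. training-set size m.
    for m in (10, 20, 40, 80, 160):
        print(f"m = {m:4d}: generalization error ~ {generalization_error(m):.4g}")

Because the target here is unrealizable by the fixed basis, the averaged error decreases with m toward a nonzero residual, which is the qualitative shape of the unrealizable-rule learning curves the paper analyzes.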