Published in

Massachusetts Institute of Technology Press, Neural Computation, 8(2), pp. 451-460, 1996

DOI: 10.1162/neco.1996.8.2.451

The Interchangeability of Learning Rate and Gain in Backpropagation Neural Networks

Journal article published in 1996 by Georg Thimm, Perry Moerland, Emile Fiesler
This paper is available in a repository.


Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving restricted
Data provided by SHERPA/RoMEO

Abstract

The backpropagation algorithm is widely used for training multilayer neural networks. In this publication the gain of its activation function(s) is investigated. Specifically, it is proven that changing the gain of the activation function is equivalent to changing the learning rate and the weights. This simplifies the backpropagation learning rule by eliminating one of its parameters. The theorem can be extended to hold for some well-known variations on the backpropagation algorithm, such as using a momentum term, flat spot elimination, or adaptive gain. Furthermore, it is successfully applied to compensate for the nonstandard gain of optical sigmoids in optical neural networks.
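
The stated equivalence can be checked numerically. The sketch below is not code from the paper; it assumes the standard logistic activation with gain beta, sigma(beta*x) = 1/(1 + exp(-beta*x)), and trains two one-hidden-layer networks with plain backpropagation: network A uses gain beta, learning rate lr, and weights W; network B uses gain 1, weights and biases scaled by beta, and learning rate scaled by beta squared. The network sizes, toy data, and the values beta = 2.5 and lr = 0.1 are illustrative choices, not values from the paper.

import numpy as np

# Minimal sketch (illustrative, not the authors' code): checks that training
# with activation gain `beta` matches training a gain-1 network whose weights
# are scaled by beta and whose learning rate is scaled by beta**2.

rng = np.random.default_rng(0)

def sigmoid(x, gain=1.0):
    return 1.0 / (1.0 + np.exp(-gain * x))

def train_step(W1, b1, W2, b2, x, t, lr, gain):
    """One backpropagation step for a one-hidden-layer network; returns updated parameters."""
    a1 = W1 @ x + b1
    h = sigmoid(a1, gain)
    a2 = W2 @ h + b2
    y = sigmoid(a2, gain)

    # Derivative of sigmoid(gain * a) with respect to a is gain * s * (1 - s).
    d2 = (y - t) * gain * y * (1.0 - y)          # output-layer delta
    d1 = (W2.T @ d2) * gain * h * (1.0 - h)      # hidden-layer delta

    W2 = W2 - lr * np.outer(d2, h)
    b2 = b2 - lr * d2
    W1 = W1 - lr * np.outer(d1, x)
    b1 = b1 - lr * d1
    return W1, b1, W2, b2

# Toy problem: 3 inputs, 4 hidden units, 2 outputs (sizes are arbitrary).
x = rng.normal(size=3)
t = rng.uniform(size=2)
W1 = rng.normal(size=(4, 3)); b1 = rng.normal(size=4)
W2 = rng.normal(size=(2, 4)); b2 = rng.normal(size=2)

beta, lr = 2.5, 0.1

# Network A: gain beta, learning rate lr, original weights.
A = (W1.copy(), b1.copy(), W2.copy(), b2.copy())
# Network B: gain 1, weights and biases scaled by beta, learning rate scaled by beta**2.
B = (beta * W1, beta * b1, beta * W2, beta * b2)

for _ in range(100):
    A = train_step(*A, x, t, lr, beta)
    B = train_step(*B, x, t, lr * beta**2, 1.0)

# B's weights should remain beta times A's weights after every update,
# so both networks compute the same function throughout training.
print(np.allclose(beta * A[0], B[0]), np.allclose(beta * A[2], B[2]))  # expected: True True

The scale factor beta**2 on the learning rate arises because, for each layer, the gain multiplies both the forward pre-activation and the backpropagated delta, so the gradient with respect to the unscaled weights picks up one extra factor of beta relative to the gradient with respect to the scaled weights.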