Published in

Institute of Electrical and Electronics Engineers, IEEE Transactions on Neural Networks, 22(5), pp. 673-686, 2011

DOI: 10.1109/tnn.2011.2109736

Reduced HyperBF Networks: Regularization by Explicit Complexity Reduction and Scaled Rprop-Based Training

Journal article published in 2011 by Rami N. Mahdi and Eric Christian Rouchka
This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

Hyper basis function (HyperBF) networks are generalized radial basis function (RBF) neural networks in which the activation function is a radial function of a weighted distance. This generalization gives HyperBF networks a high capacity to learn complex functions, which in turn makes them susceptible to overfitting and poor generalization. Moreover, training a HyperBF network requires the weights, centers, and local scaling factors to be optimized simultaneously; for a relatively large dataset with a large network structure, such optimization becomes computationally challenging. In this paper, a new regularization method that performs soft local dimension reduction in addition to weight decay is proposed. The regularized HyperBF network is shown to provide classification accuracy competitive with a support vector machine while requiring a significantly smaller network structure. Furthermore, a practical training procedure for constructing HyperBF networks is presented. Hierarchical clustering is used to initialize the neurons, followed by gradient-based optimization using a scaled version of the Rprop algorithm with a localized partial backtracking step. Experimental results on seven datasets show that the proposed training method provides faster and smoother convergence than the regular Rprop algorithm.
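
To make the weighted-distance idea concrete, here is a minimal NumPy sketch of a HyperBF forward pass. It is an illustration only, not the authors' implementation: it assumes Gaussian radial activations and a diagonal scaling matrix per neuron (the per-dimension scaling factors that the proposed regularizer softly drives toward zero to reduce local dimensionality); the paper's exact parameterization and penalty are defined in the full text.

```python
import numpy as np

def hyperbf_forward(x, centers, scales, weights):
    """Evaluate a HyperBF network at a single input x.

    Sketch under assumed conventions:
      x       -- input vector, shape (d,)
      centers -- neuron centers, shape (m, d)
      scales  -- per-neuron, per-dimension scaling factors, shape (m, d)
                 (a diagonal weighted-distance metric; the HyperBF
                 formulation also allows a full matrix per neuron)
      weights -- linear output-layer weights, shape (m,)
    """
    # Weighted squared distance of x to each center:
    #   d_j^2 = sum_k scales[j, k]^2 * (x[k] - centers[j, k])^2
    d2 = np.sum((scales * (x - centers)) ** 2, axis=1)
    # Radial (here Gaussian) activation of the weighted distance.
    phi = np.exp(-d2)
    # Linear combination at the output layer.
    return weights @ phi
```

For reference, the sign-based step-size adaptation at the core of Rprop looks roughly as follows; the paper's variant adds gradient scaling and a localized partial backtracking step, which are not reproduced in this sketch.

```python
def rprop_step(params, grad, prev_grad, step,
               eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One iteration of plain Rprop (the variant without weight backtracking).

    Step sizes grow where the gradient keeps its sign and shrink where
    it flips; each parameter then moves by its own step size in the
    direction opposite its gradient sign.
    """
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    params = params - np.sign(grad) * step
    return params, step, grad  # grad becomes prev_grad on the next call
```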