Published in

IEEE Transactions on Neural Networks, vol. 18, no. 5, pp. 1294-1305, 2007

DOI: 10.1109/tnn.2007.894058

Localized Generalization Error Model and Its Application to Architecture Selection for Radial Basis Function Neural Network

Journal article published in 2007 by Daniel S. Yeung, Wing W. Y. Ng, Defeng Wang, Eric C. C. Tsang, Xi-Zhao Wang
This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

The generalization error bounds found by current error models, based on the number of effective parameters of a classifier and the number of training samples, are usually very loose. These bounds are intended for the entire input space. However, support vector machines (SVMs), radial basis function neural networks (RBFNNs), and multilayer perceptron neural networks (MLPNNs) are local learning machines and treat unseen samples near the training samples as more important. In this paper, we propose a localized generalization error model which bounds from above the generalization error within a neighborhood of the training samples using a stochastic sensitivity measure. The model is then used to develop an architecture selection technique that maximizes a classifier's coverage of unseen samples subject to a specified generalization error threshold. Experiments on 17 University of California at Irvine (UCI) data sets show that, compared with cross validation (CV), sequential learning, and two other ad hoc methods, our technique consistently yields the best testing classification accuracy with fewer hidden neurons and less training time.
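To make the key idea concrete, the sketch below empirically estimates a stochastic sensitivity measure for a toy RBF network: the expected squared change in the network output when each training sample is perturbed within its Q-neighborhood. This is an illustrative Monte Carlo approximation, not the paper's closed-form derivation; the network parameters (`centers`, `widths`, `weights`) and the uniform perturbation distribution are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy RBF network: f(x) = sum_j w_j * exp(-||x - c_j||^2 / (2 v_j)).
# Centers, widths, and weights are arbitrary values for illustration.
centers = rng.normal(size=(5, 2))   # 5 hidden neurons, 2-D inputs
widths = np.full(5, 0.5)
weights = rng.normal(size=5)

def rbf_out(X):
    # Gaussian activation of each sample for each center, then weighted sum.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * widths)) @ weights

def stochastic_sensitivity(X, Q, n_mc=200):
    """Monte Carlo estimate of E[(f(x + dx) - f(x))^2], with dx drawn
    uniformly from the Q-neighborhood [-Q, Q]^n of each training sample."""
    base = rbf_out(X)
    sq_diffs = []
    for _ in range(n_mc):
        dx = rng.uniform(-Q, Q, size=X.shape)
        sq_diffs.append((rbf_out(X + dx) - base) ** 2)
    return float(np.mean(sq_diffs))

X_train = rng.normal(size=(50, 2))
print(stochastic_sensitivity(X_train, Q=0.1))
```

A small sensitivity for a given Q suggests the network's outputs are stable within that neighborhood of the training samples, which is what lets the localized model bound the generalization error there; comparing this quantity across candidate numbers of hidden neurons is the spirit of the architecture selection the paper proposes.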