Published in

2009 International Conference on Mechatronics and Automation

DOI: 10.1109/icma.2009.5246519

Common Nature of Learning between BP and Hopfield-Type Neural Networks for Convex Quadratic Minimization with Simplified Network Models

Proceedings article published in 2009 by Yunong Zhang, Yanyan Shi, Binghuang Cai, Zhan Li, Chenfu Yi, Jianzhang Mai
This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

In this paper, two different types of neural networks are investigated and employed for the online solution of strictly convex quadratic minimization: a two-layer back-propagation neural network (BPNN) and a discrete-time Hopfield-type neural network (HNN). As simplified models, their error functions can be defined directly as the quadratic objective function, from which the weight-updating formula of the BPNN and the state-transition equation of the HNN are derived. It is shown that the two derived learning expressions are mathematically identical, even though the two networks differ considerably in architecture, physical meaning, and training patterns. Computer simulations further substantiate the efficacy of both the BPNN and HNN models for convex quadratic minimization and, more importantly, their common nature of learning.
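
The common learning rule described in the abstract can be illustrated with a short sketch. For a strictly convex quadratic objective f(x) = (1/2)xᵀAx + bᵀx with A symmetric positive-definite, both the BPNN weight update and the HNN state transition reduce to the same gradient-descent iteration x ← x − η(Ax + b). The specific matrix A, vector b, step size eta, and iteration count below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

# Illustrative strictly convex quadratic: f(x) = 0.5 * x^T A x + b^T x,
# with A symmetric positive-definite so the minimizer x* = -A^{-1} b is unique.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([-1.0, 2.0])

eta = 0.1          # learning rate / step size (assumed value)
x = np.zeros(2)    # initial state (BPNN weights or HNN neuron states)

# Common learning rule: whether read as the BPNN weight-updating formula
# or the HNN state-transition equation, the iteration is
# x <- x - eta * (A x + b), i.e. gradient descent on the objective.
for k in range(200):
    gradient = A @ x + b
    x = x - eta * gradient

print("approximate minimizer:", x)
print("closed-form minimizer:", np.linalg.solve(A, -b))
```

With eta small enough (here η·λmax(A) < 2), the iterate converges to the unique minimizer, matching the closed-form solution; the point of the sketch is that one update expression serves both network interpretations.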