Published in

Massachusetts Institute of Technology Press, Neural Computation, 28(7), pp. 1289-1304, 2016

DOI: 10.1162/neco_a_00849

A single hidden layer feedforward network with only one neuron in the hidden layer can approximate any univariate function

Journal article published in 2016 by Namig J. Guliyev and Vugar E. Ismailov
This paper is available in a repository.


Abstract

The possibility of approximating a continuous function on a compact subset of the real line by a feedforward single hidden layer neural network with a sigmoidal activation function has been studied in many papers. Such networks can approximate an arbitrary continuous function provided that an unlimited number of neurons in a hidden layer is permitted. In this note, we consider constructive approximation on any finite interval of ℝ by neural networks with only one neuron in the hidden layer. We construct algorithmically a smooth, sigmoidal, almost monotone activation function σ providing approximation to an arbitrary continuous function within any degree of accuracy. This algorithm is implemented in a computer program, which computes the value of σ at any reasonable point of the real axis.
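In the setting of the abstract, a single hidden layer network with one hidden neuron computes N(x) = c₀ + c₁·σ(wx + θ) for weights w, c₀, c₁ and threshold θ. The sketch below shows this forward pass in Python; it uses the ordinary logistic function as a stand-in for the specially constructed σ, which the paper builds algorithmically and which is not reproduced here. All parameter names are illustrative.

```python
import math

def logistic(t):
    # Stand-in sigmoidal activation (the logistic function).
    # The paper constructs a different smooth, almost monotone
    # sigmoidal sigma tailored to the target accuracy.
    return 1.0 / (1.0 + math.exp(-t))

def one_neuron_net(x, c0, c1, w, theta, activation=logistic):
    # Single hidden layer feedforward network with one hidden neuron:
    #   N(x) = c0 + c1 * activation(w * x + theta)
    return c0 + c1 * activation(w * x + theta)

# Example: with c0 = 0, c1 = 1, w = 1, theta = 0 the network reduces
# to the activation itself, so N(0) = logistic(0) = 0.5.
print(one_neuron_net(0.0, c0=0.0, c1=1.0, w=1.0, theta=0.0))
```

With a fixed activation such as the logistic, this one-neuron family is far too small to approximate arbitrary continuous functions; the paper's point is that a single, cleverly constructed σ makes this same architecture universal on any finite interval.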