Published in

Taylor and Francis Group, Network: Computation in Neural Systems, 8(4), p. 441-452

DOI: 10.1088/0954-898x/8/4/006

Unsupervised discovery of invariances

Journal article published in 1997 by Stephen Eglen ORCID, Alistair Bray, Jim Stone
This paper is available in a repository.

Preprint: archiving forbidden
Postprint: archiving restricted
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

The grey level profiles of adjacent image regions tend to be different, whilst the `hidden' physical parameters associated with these regions (e.g. surface depth, edge orientation) tend to have similar values. We demonstrate that a network in which adjacent units receive inputs from adjacent image regions learns to code for hidden parameters. The learning rule takes advantage of the spatial smoothness of physical parameters in general to discover particular parameters embedded in grey level profiles which vary rapidly across an input image. We provide examples in which networks discover stereo disparity and feature orientation as invariances underlying image data.

1. Introduction

A crucial requirement for an intelligent system operating in a complex environment is that it can `see the wood for the trees', i.e. it can determine the significant `hidden' parameters underlying large streams of confusing input data. This problem is confronted by a child...
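The smoothness principle stated in the abstract can be illustrated with a minimal sketch. This is not the authors' algorithm; it is our own toy construction, assuming a linear unit and using a closed-form generalized-eigenvector solution rather than an online learning rule. The idea it shares with the paper: a slowly varying hidden parameter is mixed into inputs whose grey levels vary rapidly from region to region, and a unit that maximizes the ratio of long-range output variance to short-range (adjacent-region) output variance recovers that hidden parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 10

# Toy data (our assumption, not from the paper): a smooth hidden parameter
# and a rapidly varying "grey level" component, mixed into d-dimensional inputs.
pos = np.linspace(0, 4 * np.pi, n)
hidden = np.sin(pos)                  # varies smoothly across adjacent regions
fast = rng.standard_normal(n)         # varies rapidly across adjacent regions
mix = rng.standard_normal((2, d))
X = np.outer(hidden, mix[0]) + np.outer(fast, mix[1])

# Short-range covariance U from differences of adjacent inputs, and
# long-range covariance V from the overall input variance.
dX = np.diff(X, axis=0)
U = dX.T @ dX / len(dX) + 1e-6 * np.eye(d)   # regularize: X spans only 2 dims
Xc = X - X.mean(axis=0)
V = Xc.T @ Xc / len(X)

# Maximize (w^T V w) / (w^T U w): top generalized eigenvector of (U^-1 V).
evals, evecs = np.linalg.eig(np.linalg.solve(U, V))
w = np.real(evecs[:, np.argmax(np.real(evals))])
y = X @ w

# The unit's output should track the smooth hidden parameter (up to sign/scale),
# even though the inputs themselves are dominated by rapid variation.
corr = abs(np.corrcoef(y, hidden)[0, 1])
print(f"|correlation with hidden parameter| = {corr:.2f}")
```

Maximizing this variance ratio is one standard way of formalizing "adjacent outputs should be similar while outputs overall stay informative"; the paper's network realizes the same intuition with spatially adjacent units and a learning rule rather than an explicit eigendecomposition.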