Published in

The 2010 International Joint Conference on Neural Networks (IJCNN)

DOI: 10.1109/ijcnn.2010.5596497

Fusing bottom-up and top-down pathways in neural networks for visual object recognition

Proceedings article published in 2010 by Yuhua Zheng, Yan Meng, Yaochu Jin
This paper is available in a repository.


Abstract

In this paper, an artificial neural network model is built with two pathways: a bottom-up, sensory-driven pathway and a top-down, expectation-driven pathway, which are fused to train the network for visual object recognition. During supervised learning, the bottom-up pathway generates hypotheses as network outputs, and the target label is then applied to update the bottom-up connections. In turn, the hypotheses generated by the bottom-up pathway produce expectations on the sensory input through the top-down pathway. These expectations are constrained by the real sensory data, which is used to update the top-down connections accordingly. The two-pathway network can also be applied to semi-supervised learning with both labeled and unlabeled data, since the network is able to generate hypotheses and corresponding expectations even without labels. Experiments on visual object recognition suggest that the proposed model is promising for recovering objects when the sensory inputs contain missing data.
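The training loop described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the layer sizes, learning rule (a simple delta rule), and learning rate below are all assumptions made for the toy example. It only shows the two coupled updates: the bottom-up weights move toward the target label, while the top-down weights move so that the expectation reconstructs the real sensory input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions and learning rate -- not specified in the paper.
n_in, n_out = 16, 3
lr = 0.1

# Bottom-up (sensory-driven) and top-down (expectation-driven) connections.
W_up = rng.normal(0.0, 0.1, (n_out, n_in))
W_down = rng.normal(0.0, 0.1, (n_in, n_out))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train_step(x, y_onehot):
    """One supervised step fusing both pathways (delta-rule sketch)."""
    global W_up, W_down
    # Bottom-up pass: the network generates a hypothesis (class probabilities).
    h = softmax(W_up @ x)
    # Top-down pass: the hypothesis produces an expectation of the sensory input.
    x_hat = W_down @ h
    # Target label updates the bottom-up connections.
    W_up += lr * np.outer(y_onehot - h, x)
    # Real sensory data constrains the expectation and updates the top-down connections.
    W_down += lr * np.outer(x - x_hat, h)
    return h, x_hat

# Toy usage: repeatedly present one labeled sample.
x = rng.normal(0.0, 1.0, n_in)
y = np.zeros(n_out)
y[1] = 1.0
for _ in range(50):
    h, x_hat = train_step(x, y)
```

In the semi-supervised case described in the abstract, the same top-down update can run on unlabeled samples, with the network's own hypothesis `h` standing in for the missing label.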