Published in

2010 IEEE International Conference on Image Processing

DOI: 10.1109/icip.2010.5651831

Combining free energy score spaces with information theoretic kernels: Application to scene classification

This paper is made freely available by the publisher.


Abstract

Most approaches to learning classifiers for structured objects (e.g., images) use generative models in a classical Bayesian framework. However, state-of-the-art classifiers for vectorial data (e.g., support vector machines) are learned discriminatively. A generative embedding is a mapping from the object space into a fixed-dimensional score space, induced by a generative model that is usually learned from data. The fixed dimensionality of these generative score spaces makes them well suited to discriminative learning of classifiers, thus bringing together the best of the generative and discriminative paradigms. In particular, it was recently shown that this hybrid approach outperforms a classifier obtained directly from the generative model upon which the score space was built. Using a generative embedding involves two steps: (i) defining and learning the generative model and using it to build the embedding; (ii) discriminatively learning a (possibly kernel-based) classifier on the adopted score space. The literature on generative embeddings focuses essentially on step (i), usually adopting some standard off-the-shelf tool for step (ii). In this paper, we take a different approach by also focusing on the discriminative learning step. In particular, we combine two recent, top-performing tools, one for each step: (i) the free energy score space; (ii) non-extensive information-theoretic kernels. We apply this methodology to scene recognition. Experimental results on two benchmark datasets show that our approach yields state-of-the-art performance.
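The abstract does not include code, but step (ii) can be illustrated with a minimal sketch. The sketch below assumes that free energy score vectors have already been extracted for each image and that they are shifted and row-normalised so each sample can be treated as a pseudo-probability vector; it then uses the Jensen-Shannon kernel (the q -> 1 limit of the non-extensive Jensen-Tsallis family referred to in the abstract) inside a precomputed-kernel SVM. The function names (js_kernel, normalise), the normalisation scheme, and the scikit-learn setup are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from sklearn.svm import SVC

    def _entropy(P):
        # Shannon entropy of each row of P (convention: 0 * log 0 = 0)
        with np.errstate(divide="ignore", invalid="ignore"):
            logs = np.where(P > 0, np.log(P), 0.0)
        return -(P * logs).sum(axis=1)

    def js_kernel(A, B):
        # k(p, q) = ln 2 - JS(p, q): the Jensen-Shannon kernel, positive
        # definite on probability vectors (rows of A and B).
        K = np.empty((A.shape[0], B.shape[0]))
        H_B = _entropy(B)
        for i, p in enumerate(A):
            H_mid = _entropy(0.5 * (p + B))        # entropies of midpoint distributions
            H_p = _entropy(p[None, :])[0]
            K[i] = np.log(2.0) - (H_mid - 0.5 * (H_p + H_B))
        return K

    def normalise(X):
        # Assumed preprocessing: shift score vectors to be non-negative and
        # normalise each row to sum to one.
        X = X - X.min(axis=1, keepdims=True) + 1e-12
        return X / X.sum(axis=1, keepdims=True)

    # Illustrative usage (X_train, X_test are hypothetical free energy score
    # matrices, y_train the class labels):
    # P_train, P_test = normalise(X_train), normalise(X_test)
    # clf = SVC(kernel="precomputed", C=1.0)
    # clf.fit(js_kernel(P_train, P_train), y_train)
    # y_pred = clf.predict(js_kernel(P_test, P_train))

A precomputed Gram matrix is used because information-theoretic kernels compare whole score vectors as distributions, which does not fit the standard built-in kernels of off-the-shelf SVM libraries.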