Published in: Springer, Lecture Notes in Computer Science, pp. 463-472, 2010
DOI: 10.1007/978-3-642-14980-1_45

Information Theoretical Kernels for Generative Embeddings Based on Hidden Markov Models

This paper is made freely available by the publisher.

Preprint: archiving forbidden
Postprint: archiving restricted
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

Many approaches to learning classifiers for structured objects (e.g., shapes) use generative models in a Bayesian framework. However, state-of-the-art classifiers for vectorial data (e.g., support vector machines) are learned discriminatively. A generative embedding is a mapping from the object space into a fixed-dimensional feature space, induced by a generative model which is usually learned from data. The fixed dimensionality of these feature spaces permits the use of state-of-the-art discriminative machines based on vectorial representations, thus bringing together the best of the discriminative and generative paradigms. Using a generative embedding involves two steps: (i) defining and learning the generative model used to build the embedding; (ii) discriminatively learning a (possibly kernel-based) classifier on the adopted feature space. The literature on generative embeddings is essentially focused on step (i), usually adopting some standard off-the-shelf tool (e.g., an SVM with a linear or RBF kernel) for step (ii). In this paper, we follow a different route, by combining several hidden Markov model-based generative embeddings (including the classical Fisher score) with the recently proposed non-extensive information-theoretic kernels. We test this methodology on a 2D shape recognition task, showing that the proposed method is competitive with the state of the art.
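To make the two-step pipeline described in the abstract concrete, here is a minimal Python sketch, assuming the hmmlearn and scikit-learn libraries. For brevity it uses a simple per-class log-likelihood embedding in place of the Fisher score, and the Jensen-Shannon kernel, which is the q -> 1 limit of the non-extensive Jensen-Tsallis family; the toy data and all names are illustrative, not taken from the paper.

```python
# Sketch of the two-step pipeline: (i) per-class generative HMMs induce an
# embedding; (ii) an SVM with an information-theoretic kernel is learned
# discriminatively on the embedded points. Illustrative only.
import numpy as np
from hmmlearn import hmm          # pip install hmmlearn
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_sequences(mean, n_seqs=30, length=40):
    """Toy 1-D observation sequences around a class-specific mean."""
    return [mean + rng.standard_normal((length, 1)) for _ in range(n_seqs)]

# Two synthetic classes of sequences (stand-ins for shape contours).
class_means = [0.0, 1.5]
train = {c: make_sequences(m) for c, m in enumerate(class_means)}

# Step (i): fit one generative HMM per class.
models = {}
for c, seqs in train.items():
    X = np.concatenate(seqs)
    lengths = [len(s) for s in seqs]
    models[c] = hmm.GaussianHMM(n_components=3, random_state=0).fit(X, lengths)

def embed(seq):
    """Generative embedding: per-class log-likelihoods, softmax-normalized
    so each embedded point lies on the probability simplex."""
    ll = np.array([models[c].score(seq) for c in models])
    ll -= ll.max()                      # numerical stability
    p = np.exp(ll)
    return p / p.sum()

def js_kernel(P, Q):
    """Jensen-Shannon kernel matrix: k(p, q) = ln 2 - JSD(p, q)."""
    def H(p):  # Shannon entropy, with 0 log 0 := 0 via clipping
        p = np.clip(p, 1e-12, 1.0)
        return -(p * np.log(p)).sum(axis=-1)
    M = 0.5 * (P[:, None, :] + Q[None, :, :])
    jsd = H(M) - 0.5 * (H(P)[:, None] + H(Q)[None, :])
    return np.log(2.0) - jsd

# Step (ii): discriminative learning with a precomputed kernel SVM.
seqs = [s for c in train for s in train[c]]
y = np.array([c for c in train for _ in train[c]])
P = np.vstack([embed(s) for s in seqs])
clf = SVC(kernel="precomputed").fit(js_kernel(P, P), y)

test = make_sequences(1.5, n_seqs=5)
Pt = np.vstack([embed(s) for s in test])
print(clf.predict(js_kernel(Pt, P)))   # expected: mostly class 1
```

Normalizing the embedded vectors onto the probability simplex is what allows the information-theoretic kernel to treat each embedded object as a distribution; the paper's Fisher-score and other HMM-based embeddings would replace the embed function above.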