Published in

14th International Conference on Image Analysis and Processing (ICIAP 2007)

DOI: 10.1109/iciap.2007.4362759


Sparseness Achievement in Hidden Markov Models

Proceedings article published in 2007 by Manuele Bicego, Marco Cristani, Vittorio Murino
This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

This paper presents a novel learning algorithm for Hidden Markov Models (HMMs). The key goal is the achievement of a sparse model, i.e., a model in which all irrelevant parameters are set exactly to zero. As an alternative to standard maximum likelihood estimation (Baum-Welch training), the proposed approach casts the parameter estimation problem into a Bayesian framework, introducing a negative Dirichlet prior that strongly encourages sparseness of the model. A modified Expectation-Maximization algorithm is devised to determine a MAP (maximum a posteriori probability) estimate of the HMM parameters in this Bayesian formulation. Theoretical considerations and comparative experimental evaluations on a 2D shape classification task validate the proposed technique.
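
To make the sparsifying effect concrete, the following LaTeX sketch shows the kind of M-step update a Dirichlet prior with negative exponents typically yields for a row of an HMM transition matrix. This is a minimal sketch under assumed notation: the prior strength \beta and the expected transition counts \xi_{ij} from the E-step are illustrative symbols, not taken from the paper.

% Minimal sketch, assuming a negative Dirichlet prior on each row a_i
% of the transition matrix: p(a_i) \propto \prod_j a_{ij}^{-\beta}, \beta > 0.
% \xi_{ij} denotes the expected transition counts computed in the E-step.
% Standard Baum-Welch (maximum likelihood) M-step:
%   \hat{a}_{ij} = \xi_{ij} \big/ \textstyle\sum_k \xi_{ik}
% MAP M-step under the negative Dirichlet prior:
\hat{a}_{ij} \;=\; \frac{\max\left(\xi_{ij} - \beta,\; 0\right)}
                        {\sum_{k} \max\left(\xi_{ik} - \beta,\; 0\right)}
% Any transition whose expected count falls below \beta is driven exactly
% to zero; as \beta \to 0 the update reduces to the maximum likelihood one.

Under an update of this form, larger values of \beta prune more parameters, which matches the abstract's description of irrelevant parameters being set exactly to zero rather than merely shrunk toward it.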