Published in

Institute of Electrical and Electronics Engineers, IEEE Signal Processing Letters, 18(12), pp. 705-708, 2011

DOI: 10.1109/lsp.2011.2170566

Von Mises-Fisher models in the total variability subspace for language recognition

This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden

Data provided by SHERPA/RoMEO

Abstract

Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

I. Lopez-Moreno, D. Ramos, J. Gonzalez-Dominguez, and J. Gonzalez-Rodriguez, "Von Mises-Fisher models in the total variability subspace for language recognition", IEEE Signal Processing Letters, vol. 18, no. 12, pp. 705-708, October 2011.

This letter proposes a new modeling approach for the Total Variability subspace within a Language Recognition task. Motivated by previous works in directional statistics, von Mises-Fisher distributions are used for assigning language-conditioned probabilities to language data, assumed to be spherically distributed in this subspace. The two proposed methods use Kernel Density Functions or Finite Mixture Models of such distributions. Experiments conducted on NIST LRE 2009 show that the proposed techniques significantly outperform the baseline cosine distance approach in most of the considered experimental conditions, including different speech conditions, durations and the presence of unseen languages.
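To make the connection between the von Mises-Fisher model and the cosine-distance baseline concrete, here is a minimal illustrative sketch (not code from the paper) of the vMF log-density on the unit sphere. The density is f(x; mu, kappa) = C_p(kappa) * exp(kappa * mu^T x), so for a fixed concentration kappa the log-score is an affine function of the cosine similarity mu^T x, which is why vMF scoring can be seen as a probabilistic generalization of cosine scoring. The function and parameter names below are assumptions for illustration.

```python
import numpy as np
from scipy.special import ive  # exponentially scaled modified Bessel function I_nu


def vmf_log_density(x, mu, kappa):
    """Log-density of a von Mises-Fisher distribution on the sphere S^(p-1).

    f(x; mu, kappa) = C_p(kappa) * exp(kappa * mu^T x), with
    C_p(kappa) = kappa^(p/2 - 1) / ((2*pi)^(p/2) * I_{p/2-1}(kappa)).

    x, mu : unit-norm vectors of dimension p; kappa : concentration > 0.
    """
    p = x.shape[-1]
    nu = p / 2.0 - 1.0
    # log I_nu(kappa) computed stably via the scaled Bessel function:
    # ive(nu, kappa) = I_nu(kappa) * exp(-kappa)
    log_bessel = np.log(ive(nu, kappa)) + kappa
    log_norm = nu * np.log(kappa) - (p / 2.0) * np.log(2.0 * np.pi) - log_bessel
    # The data-dependent term kappa * mu^T x is the (scaled) cosine similarity.
    return log_norm + kappa * np.dot(mu, x)
```

A language-conditioned score for a test i-vector would then be a sum (mixture or kernel density) of such log-densities over that language's components, instead of a single cosine distance to a language mean.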