Published in

Springer, Lecture Notes in Computer Science, pp. 56-87, 2005

DOI: 10.1007/11559887_4

Extensions of the Informative Vector Machine.

Proceedings article published in 2004 by Neil D. Lawrence, John C. Platt, and Michael I. Jordan.
This paper is available in a repository.

Preprint: archiving forbidden
Postprint: archiving restricted
Published version: archiving forbidden
(Data provided by SHERPA/RoMEO)

Abstract

The informative vector machine (IVM) is a practical method for Gaussian process regression and classification. The IVM produces a sparse approximation to a Gaussian process by combining assumed density filtering with a heuristic that selects points to minimize the posterior entropy. This paper extends the IVM in several ways. First, we propose a novel noise model that allows the IVM to be applied to a mixture of labeled and unlabeled data. Second, we apply the IVM with a block-diagonal covariance matrix for "learning to learn" from related tasks. Third, we modify the IVM to incorporate prior knowledge of known invariances. All of these extensions are tested on artificial and real data.
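
To make the selection heuristic concrete, below is a minimal Python sketch (not the authors' code) of IVM-style greedy point selection for Gaussian process regression. It assumes an RBF kernel and a Gaussian noise model, under which including a point reduces the posterior entropy by 0.5 * log(1 + var_i / noise_var), so the greedy choice is the point with the largest current posterior variance; the function name ivm_select and all parameter values are illustrative.

import numpy as np

def rbf_kernel(X, Y, lengthscale=1.0, variance=1.0):
    # Squared-exponential covariance between the rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def ivm_select(X, d, noise_var=0.1):
    # Greedy IVM-style active-set selection for GP regression.
    # At each step, include the point whose assumed-density-filtering
    # update most reduces the posterior entropy; with Gaussian noise
    # this is the point with the largest current posterior variance.
    n = X.shape[0]
    K = rbf_kernel(X, X)
    var = K.diagonal().copy()   # current posterior variances
    M = np.zeros((0, n))        # low-rank factor: posterior cov = K - M.T @ M
    active = []
    for _ in range(d):
        delta = 0.5 * np.log1p(var / noise_var)  # entropy reduction per point
        delta[active] = -np.inf                  # never reselect a point
        i = int(np.argmax(delta))
        active.append(i)
        # Rank-one posterior update after absorbing observation i.
        cov_col = K[:, i] - M.T @ M[:, i]
        s = cov_col / np.sqrt(noise_var + var[i])
        M = np.vstack([M, s])
        var = var - s ** 2
    return active

# Example: pick a 20-point active set from 200 random 1-D inputs.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
print(ivm_select(X, d=20))

For a classification noise model, or for the semi-supervised, multi-task, and invariance extensions described in the abstract, the entropy-reduction score and the rank-one update would change, but the greedy selection loop keeps the same structure.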