Published in

Springer Verlag, Theory and Decision, 71(1), pp. 33-62

DOI: 10.1007/s11238-010-9219-2

PRM inference using Jaffray & Faÿ’s Local Conditioning

Journal article published in 2010 by Christophe Gonzales and Pierre-Henri Wuillemin
This paper was not found in any repository, but could be made available legally by the author.

Full text: Unavailable

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden

Data provided by SHERPA/RoMEO

Abstract

Probabilistic Relational Models (PRMs) are a framework for compactly representing uncertainties (more precisely, probabilities). They result from the combination of Bayesian Networks (BNs), Object-Oriented languages, and relational models. They are specifically designed for efficient construction, maintenance, and exploitation in very large-scale problems, where BNs are known to perform poorly. Indeed, in large-scale problems, it is often the case that BNs result from the combination of patterns (small BN fragments) repeated many times. PRMs exploit this feature by defining these patterns only once (the so-called PRM classes) and using them through multiple instances, as prescribed by the Object-Oriented paradigm. This design induces low construction and maintenance costs. In addition, by exploiting the structure of classes, the state-of-the-art PRM inference algorithm, Structured Variable Elimination (SVE), significantly outperforms classical BN inference algorithms (e.g., Variable Elimination, VE; Local Conditioning, LC). SVE is essentially an extension of VE that exploits classes to avoid redundant computations. In this article, we show that SVE can be enhanced using LC. Although LC is often thought of as being outperformed by VE-like algorithms in BNs, we believe it should play an important role for PRMs because its features are very well suited to exploiting PRM classes. Relying on Faÿ and Jaffray's works, we show how LC can be used in conjunction with VE and derive an extension of SVE that outperforms it on large-scale problems. Numerical experiments highlight the practical efficiency of our algorithm.
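
The abstract refers to Variable Elimination (VE), the classical Bayesian-network inference scheme that SVE extends with PRM classes. As a rough illustration of what VE does, the sketch below runs sum-product elimination over small hand-built factors on a toy two-node network. The factor representation (binary variables, a tuple-keyed table per factor), the helper names `multiply`, `sum_out`, and `variable_elimination`, and the toy network are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch of Variable Elimination (VE) over discrete binary factors.
# A factor is a pair (variables, table): a list of variable names and a dict
# mapping assignment tuples (in that variable order) to probabilities.
from itertools import product


def multiply(f, g):
    """Pointwise product of two factors over the union of their variables."""
    fv, ft = f
    gv, gt = g
    joint_vars = fv + [v for v in gv if v not in fv]
    table = {}
    for assignment in product([0, 1], repeat=len(joint_vars)):
        a = dict(zip(joint_vars, assignment))
        fa = tuple(a[v] for v in fv)
        ga = tuple(a[v] for v in gv)
        table[assignment] = ft[fa] * gt[ga]
    return joint_vars, table


def sum_out(f, var):
    """Marginalize one variable out of a factor."""
    fv, ft = f
    idx = fv.index(var)
    new_vars = fv[:idx] + fv[idx + 1:]
    table = {}
    for assignment, p in ft.items():
        key = assignment[:idx] + assignment[idx + 1:]
        table[key] = table.get(key, 0.0) + p
    return new_vars, table


def variable_elimination(factors, order):
    """Eliminate variables in `order`; return the normalized remaining factor."""
    for var in order:
        involved = [f for f in factors if var in f[0]]
        factors = [f for f in factors if var not in f[0]]
        prod = involved[0]
        for f in involved[1:]:
            prod = multiply(prod, f)
        factors.append(sum_out(prod, var))
    result = factors[0]
    for f in factors[1:]:
        result = multiply(result, f)
    total = sum(result[1].values())
    return result[0], {k: v / total for k, v in result[1].items()}


# Toy network A -> B with P(A) and P(B | A); compute P(B) by eliminating A.
pA = (["A"], {(0,): 0.6, (1,): 0.4})
pBgA = (["B", "A"], {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.3, (1, 1): 0.7})

print(variable_elimination([pA, pBgA], order=["A"]))
# -> (['B'], {(0,): 0.66, (1,): 0.34}), up to floating-point rounding
```

In a PRM setting, the same class-level factors are instantiated many times across objects; SVE, and the LC-based extension described in the abstract, aim to exploit exactly that repetition instead of redoing the eliminations instance by instance.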