Published in

MDPI, Algorithms, 13(12), 327, 2020

DOI: 10.3390/a13120327

Feasibility of Kd-Trees in Gaussian Process Regression to Partition Test Points in High Resolution Input Space

Journal article published in 2020 by Ivan De Boi, Bart Ribbens, Pieter Jorissen, Rudi Penne
This paper is made freely available by the publisher.


Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving allowed
Data provided by SHERPA/RoMEO

Abstract

Bayesian inference using Gaussian processes on large datasets has been studied extensively over the past few years. However, little attention has been paid to how to apply these methods to a high-resolution input space. By approximating the set of test points (where we want to make predictions, as opposed to the set of training points in the dataset) with a kd-tree, a multi-resolution data structure arises that allows for considerable gains in performance and memory usage without a significant loss of accuracy. In this paper, we study the feasibility and efficiency of constructing and using such a kd-tree in Gaussian process regression. We propose a cut-off rule that is easy to interpret and to tune. We demonstrate our findings on generated toy data in a 3D point cloud and on a simulated 2D vibrometry example. This survey is beneficial for researchers who work with a high-resolution input space. The kd-tree approximation outperforms the naïve Gaussian process implementation in all experiments.
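The idea described in the abstract can be sketched in a few lines of Python. The sketch below is an illustration under my own assumptions, not the authors' code: it recursively splits the test points with a kd-tree (cutting along the widest dimension) and, once a cell's spatial extent falls below a cut-off threshold, evaluates the GP posterior mean only at the cell's centroid and broadcasts that value to every test point in the cell. The cut-off rule used here (a maximum cell extent, `max_extent`) is one easy-to-tune possibility and is not necessarily the rule proposed in the paper.

```python
# Hypothetical sketch: kd-tree partition of GP *test* points.
# All names (rbf_kernel, gp_predict, kdtree_predict, max_extent)
# are illustrative assumptions, not the paper's implementation.
import numpy as np

def rbf_kernel(A, B, length_scale=0.2):
    """Squared-exponential kernel between point sets A (n,d) and B (m,d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-4, length_scale=0.2):
    """Exact GP posterior mean at X_test (naive O(n^3) baseline)."""
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)
    return rbf_kernel(X_test, X_train, length_scale) @ alpha

def kdtree_predict(X_train, y_train, X_test, max_extent=0.02, length_scale=0.2):
    """Recursively split the test points kd-tree style; once a cell is
    smaller than max_extent in every dimension, predict only at its
    centroid and reuse that value for all test points in the cell."""
    extent = X_test.max(0) - X_test.min(0)
    if len(X_test) <= 1 or extent.max() <= max_extent:
        centroid = X_test.mean(0, keepdims=True)
        mu = gp_predict(X_train, y_train, centroid, length_scale=length_scale)
        return np.full(len(X_test), mu[0])
    axis = int(np.argmax(extent))              # split along widest dimension
    order = np.argsort(X_test[:, axis])
    half = len(X_test) // 2
    out = np.empty(len(X_test))
    out[order[:half]] = kdtree_predict(
        X_train, y_train, X_test[order[:half]], max_extent, length_scale)
    out[order[half:]] = kdtree_predict(
        X_train, y_train, X_test[order[half:]], max_extent, length_scale)
    return out
```

On a dense 1D test grid, tightening `max_extent` drives the approximation toward the exact posterior mean while requiring one GP evaluation per leaf cell instead of one per test point; flat regions of the posterior are covered by large cells, which is where the memory and runtime savings come from.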