Published in

Society for Industrial and Applied Mathematics, SIAM Journal on Scientific Computing, 36(4), pp. A1895–A1910

DOI: 10.1137/140964023

Randomize-Then-Optimize: A Method for Sampling from Posterior Distributions in Nonlinear Inverse Problems

Journal article published in 2014 by Johnathan M. Bardsley, Antti Solonen, Heikki Haario, Marko Laine
This paper is available in a repository.

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving allowed
Data provided by SHERPA/RoMEO

Abstract

High-dimensional inverse problems present a challenge for Markov chain Monte Carlo (MCMC)-type sampling schemes. Such schemes typically rely on finding an efficient proposal distribution, which can be difficult for large-scale problems, even with adaptive approaches. Moreover, the autocorrelation of the samples typically increases with dimension, which leads to the need for long sample chains. We present an alternative method, randomize-then-optimize (RTO), for sampling from posterior distributions in nonlinear inverse problems when the measurement error and the prior are both Gaussian. The approach computes a candidate sample by solving a stochastic optimization problem. In the linear case, these samples come directly from the posterior density, but this is not so in the nonlinear case. We derive the form of the sample density in the nonlinear case and then show how to use it within both Metropolis-Hastings and importance sampling frameworks to obtain samples from the posterior distribution of the parameters. We demonstrate, on various small- and medium-scale problems, that RTO can be efficient compared with standard adaptive MCMC algorithms.
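
To make the construction concrete, below is a minimal sketch of the linear-Gaussian case, where (as stated in the abstract) each optimized sample is an exact posterior draw. The forward matrix A, noise level sigma, prior scale gamma, and all variable names are illustrative assumptions rather than quantities from the paper; in the nonlinear case the same perturbed-optimization step would additionally require the Metropolis-Hastings or importance-sampling correction derived there.

```python
import numpy as np

# Minimal randomize-then-optimize (RTO) sketch for a LINEAR Gaussian model.
# Assumed illustrative setup: y = A @ theta + noise, noise ~ N(0, sigma^2 I),
# prior theta ~ N(theta0, gamma^2 I). None of these values come from the paper.

rng = np.random.default_rng(0)

m, n = 20, 5
A = rng.standard_normal((m, n))        # hypothetical linear forward model
theta_true = rng.standard_normal(n)
sigma, gamma = 0.1, 1.0                # noise std dev and prior std dev
theta0 = np.zeros(n)                   # prior mean
y = A @ theta_true + sigma * rng.standard_normal(m)

def rto_sample():
    """Draw one sample by solving a randomly perturbed least-squares problem:
    argmin_theta ||A theta - (y + e)||^2 / sigma^2
               + ||theta - (theta0 + p)||^2 / gamma^2,
    with e ~ N(0, sigma^2 I) and p ~ N(0, gamma^2 I)."""
    e = sigma * rng.standard_normal(m)
    p = gamma * rng.standard_normal(n)
    # Stack the whitened data-misfit and prior terms into one linear system
    # and solve it; in the linear case this yields an exact posterior draw.
    M = np.vstack([A / sigma, np.eye(n) / gamma])
    b = np.concatenate([(y + e) / sigma, (theta0 + p) / gamma])
    theta, *_ = np.linalg.lstsq(M, b, rcond=None)
    return theta

samples = np.array([rto_sample() for _ in range(5000)])

# Sanity check against the analytic Gaussian posterior mean for this model.
H = A.T @ A / sigma**2 + np.eye(n) / gamma**2
mean_exact = np.linalg.solve(H, A.T @ y / sigma**2 + theta0 / gamma**2)
print("RTO sample mean:     ", samples.mean(axis=0))
print("exact posterior mean:", mean_exact)
```

Because each draw comes from an independent perturbed optimization problem, the samples are independent by construction in this linear setting, in contrast to the correlated chains produced by standard MCMC.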