Published in

Elsevier, NeuroImage, 38(4), pp. 677-695, 2007

DOI: 10.1016/j.neuroimage.2007.07.032

Diffusion-based spatial priors for imaging

This paper is made freely available by the publisher.

Preprint, postprint, and published version: archiving allowed (data provided by SHERPA/RoMEO).

Abstract

We describe a Bayesian scheme to analyze images, which uses spatial priors encoded by a diffusion kernel, based on a weighted graph Laplacian. This provides a general framework to formulate a spatial model, whose parameters can be optimized. The application we have in mind is a spatiotemporal model for imaging data. We illustrate the method on a random effects analysis of fMRI contrast images from multiple subjects; this simplifies exposition of the model and enables a clear description of its salient features. Typically, imaging data are smoothed using a fixed Gaussian kernel as a pre-processing step before applying a mass-univariate statistical model (e.g., a general linear model) to provide images of parameter estimates. An alternative is to include smoothness in a multivariate statistical model (Penny, W.D., Trujillo-Barreto, N.J., Friston, K.J., 2005. Bayesian fMRI time series analysis with spatial priors. Neuroimage 24, 350-362). The advantage of the latter is that each parameter field is smoothed automatically, according to a measure of uncertainty, given the data. In this work, we investigate the use of diffusion kernels to encode spatial correlations among parameter estimates. Nonlinear diffusion has a long history in image processing; in particular, flows that depend on local image geometry (Romeny, B.M.T., 1994. Geometry-driven Diffusion in Computer Vision. Kluwer Academic Publishers) can be used as adaptive filters. This can furnish a non-stationary smoothing process that preserves features, which would otherwise be lost with a fixed Gaussian kernel. We describe a Bayesian framework that incorporates non-stationary, adaptive smoothing into a generative model to extract spatial features in parameter estimates. Critically, this means adaptive smoothing becomes an integral part of estimation and inference. We illustrate the method using synthetic and real fMRI data.
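
The central construct named in the abstract, a diffusion kernel derived from a weighted graph Laplacian and used as a spatial prior, can be sketched in a few lines. The snippet below is a minimal illustration only, not the authors' implementation: the 1-D voxel chain, the helper `chain_adjacency`, the unit edge weights, and the diffusion time `tau` are toy assumptions introduced here to show how K = exp(-tau L), with L = D - W, yields a valid Gaussian prior covariance over a parameter field.

```python
# Illustrative sketch only (not the authors' code): a diffusion kernel
# K = exp(-tau * L) built from a weighted graph Laplacian L = D - W,
# used as the covariance of a Gaussian spatial prior on a parameter field.
# The 1-D voxel chain, unit edge weights and diffusion time tau are
# arbitrary choices made for this toy example.
import numpy as np
from scipy.linalg import expm


def chain_adjacency(n_voxels: int, weight: float = 1.0) -> np.ndarray:
    """Weighted adjacency matrix W for a 1-D chain of voxels."""
    W = np.zeros((n_voxels, n_voxels))
    idx = np.arange(n_voxels - 1)
    W[idx, idx + 1] = W[idx + 1, idx] = weight
    return W


n, tau = 16, 2.0                       # toy values
W = chain_adjacency(n)
L = np.diag(W.sum(axis=1)) - W         # graph Laplacian, L = D - W
K = expm(-tau * L)                     # diffusion kernel, K = exp(-tau L)

# K is symmetric positive definite (its eigenvalues are exp(-tau * lambda_i)),
# so it is a valid covariance for a zero-mean Gaussian prior on beta:
#     p(beta) = N(0, alpha * K)
alpha = 1.0
rng = np.random.default_rng(0)
beta_sample = rng.multivariate_normal(np.zeros(n), alpha * K)
print(np.round(beta_sample, 3))
```

Larger values of tau, or stronger edge weights, correspond to heavier smoothing. In the scheme described by the abstract, such kernel parameters belong to the generative model and are optimized given the data, rather than being fixed in advance as with a pre-processing Gaussian smooth.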