Slice sampling
This post is a demonstration of slice sampling. The goal is to sample from a target distribution f(x). A slice variable, u, is added so that the joint density is p(u, x) = I(0 < u < f(x)), which maintains the marginal distribution for x. A Gibbs sampler is constructed with the following steps:
- u | x ~ Unif(0, f(x))
- x | u ~ Unif over the set A, where A = {x : u < f(x)}.
The rest of this post considers situations where A is relatively easy to find, e.g. monotone functions f(x) over the support of x and unimodal functions.
Here is a generic function (okay, it probably isn’t that generic, but it works for the purpose below) to implement slice sampling. It takes the number of draws n, an initial value x, the target density, and a function A that returns the interval A, so only unimodal targets are supported. The function returns the samples x and u (for demonstration).
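Something along these lines would do the job; this is a minimal sketch, with the arguments taken from the description above and the internal details (the name `slice`, storing draws in a data frame) being my own choices:

```r
# Minimal sketch of a generic slice sampler (assumes a unimodal target).
# n: number of draws, x: initial value, target: density f(x),
# A: function of u returning c(lower, upper) for the set {x : u < f(x)}
slice <- function(n, x, target, A) {
  xs <- numeric(n); us <- numeric(n)
  xs[1] <- x
  us[1] <- runif(1, 0, target(xs[1]))              # u | x ~ Unif(0, f(x))
  for (i in 2:n) {
    endpoints <- A(us[i - 1])                      # interval where u < f(x)
    xs[i] <- runif(1, endpoints[1], endpoints[2])  # x | u ~ Unif(A)
    us[i] <- runif(1, 0, target(xs[i]))            # u | x ~ Unif(0, f(x))
  }
  data.frame(x = xs, u = us)
}
```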
In this first example, the target distribution is Exp(1). The set A is just the interval (0,-log(u)) where u is the current value of the slice variable.
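With the sketch above, that looks roughly like this (the number of draws and the starting value are arbitrary choices):

```r
# Exp(1) target: f(x) = exp(-x), so u < f(x) exactly when x < -log(u)
target <- function(x) dexp(x, rate = 1)
A <- function(u) c(0, -log(u))
set.seed(1)
draws <- slice(n = 1000, x = 1, target = target, A = A)
```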
Here is a demonstration of a single step of the slice sampler. Given a coordinate (u, x), the sampler first draws from u | x, which is equivalent to slicing the density vertically at x and drawing uniformly on the y-axis from the interval where u < f(x), and then draws x | u, which is equivalent to slicing the density horizontally at u and drawing uniformly on the x-axis from the interval where u < f(x).
Here are 9 steps (10 samples, including the initial value) of the slice sampler:
And this compares the marginal draws for x to the truth.
Now I turn to a standard normal distribution where we pretend we cannot invert the density and therefore need another approach. The approach is going to be to use numerical methods to find the interval A (so, again, assuming a unimodal target). The magic happens below, where the uniroot function is used to find the endpoints of the interval.
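The idea would look something like this sketch; the mode at 0 and the search width of 100 are assumptions that work for a standard normal:

```r
# Standard normal target; find the slice endpoints numerically with uniroot.
target <- function(x) dnorm(x)
A <- function(u, mode = 0, width = 100) {
  g <- function(x) target(x) - u                   # endpoints solve f(x) = u
  lower <- uniroot(g, c(mode - width, mode))$root  # root to the left of the mode
  upper <- uniroot(g, c(mode, mode + width))$root  # root to the right of the mode
  c(lower, upper)
}
```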
Run the sampler for a standard normal target.
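For instance, reusing the sketch function from above:

```r
set.seed(2)
draws <- slice(n = 1000, x = 0.5, target = target, A = A)
```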
This is what one step looks like.
Or 9 steps:
And the fit to the target distribution.
Using an unnormalized density
Slice sampling can also be performed on the unnormalized density.
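For example, dropping the normalizing constant from the standard normal changes nothing in the algorithm; here is a quick sketch using the closed-form interval for the unnormalized density (the numerical uniroot version would work just as well):

```r
# Unnormalized standard normal density: f(x) = exp(-x^2 / 2).
# u < f(x) exactly when |x| < sqrt(-2 * log(u))
target <- function(x) exp(-x^2 / 2)
A <- function(u) c(-sqrt(-2 * log(u)), sqrt(-2 * log(u)))
set.seed(3)
draws <- slice(n = 1000, x = 0, target = target, A = A)
```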
Learning the truncation points
Here is an alternative version of the slice sampler that samples x from a distribution other than the uniform. The main purpose of introducing this is Bayesian inference, where the target distribution is the posterior, p(x | y) \propto p(y | x) p(x), and the augmentation is p(u, x) \propto p(x) I(0 < u < p(y | x)). So the conditional for x | u is a truncated version of the prior.
The truncation points are determined by u < p(y | x), but they are not assumed to be known. Instead, they are learned from proposed values that get rejected. So, initially we sample from p(x) I(-Inf < x < Inf) (if x has support on the whole real line) and then adjust to p(x) I(LB < x < UB), where LB and UB are updated depending on whether the rejected proposal was larger or smaller than the current value of x. If the proposed value is larger than the current value, then UB becomes the proposed value; if it is smaller, then LB becomes the proposed value.
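A sketch of what this might look like, assuming a standard normal prior so the truncated prior can be sampled by inverting its CDF (the names `slice_shrink` and `prior_sample` are mine, not from the original code):

```r
# Sample from a standard normal prior truncated to (lb, ub) via the inverse CDF
prior_sample <- function(lb, ub) qnorm(runif(1, pnorm(lb), pnorm(ub)))

slice_shrink <- function(n, x, likelihood, prior_sample) {
  xs <- numeric(n)
  xs[1] <- x
  for (i in 2:n) {
    u <- runif(1, 0, likelihood(xs[i - 1]))   # u | x ~ Unif(0, p(y | x))
    lb <- -Inf; ub <- Inf                     # start with no truncation
    repeat {
      prop <- prior_sample(lb, ub)            # draw from the truncated prior
      if (likelihood(prop) > u) { xs[i] <- prop; break }  # accept when u < p(y | prop)
      if (prop > xs[i - 1]) ub <- prop else lb <- prop    # shrink toward the current x
    }
  }
  xs
}
```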
This is run on the model y | x ~ N(x, 1), x ~ N(0, 1), with y = 1 observed. The true posterior is N(y/2, 1/2).
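With the sketch above, that would be something like:

```r
y <- 1
likelihood <- function(x) dnorm(y, mean = x, sd = 1)
set.seed(4)
post <- slice_shrink(n = 2000, x = 0, likelihood = likelihood, prior_sample = prior_sample)
# Compare mean(post) and var(post) to the true posterior mean 0.5 and variance 0.5
```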