Added a zero-centering reparametrization of the prior distributions.
Added assertions that the prior distributions are Gaussian.
Removed usage of the sample_params routine.
Hi!
I realized a while ago, after committing ess, that the current implementation is flawed.
Elliptical slice sampling (ESS) requires the prior distributions to be zero-centered;
however, there was no assertion or documentation about this fact.
So I added a reparametrization so that non-zero-centered Gaussian priors become zero-centered:
for p(x) = N(\mu, \sigma),
f(x) p(x) -> f(x + \mu) q(x), where q(x) = N(0, \sigma).
This reparametrization is also briefly stated in [1].
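To make the trick concrete, here is a minimal scalar sketch of one ESS transition operating in the zero-centered space. The `ess_step` name, the toy `loglike`, and the scalar setting are illustrative assumptions, not the package's actual implementation; the only point is that the auxiliary draw comes from q = N(0, σ) and the likelihood is always evaluated at the shifted point x + μ.

```julia
using Random

# One elliptical slice sampling step for a scalar parameter with
# Gaussian prior N(μ, σ), run in the zero-centered variable x = θ - μ.
function ess_step(x, loglike, μ, σ)
    ν = σ * randn()                        # auxiliary draw from q = N(0, σ)
    logy = loglike(x + μ) + log(rand())    # slice level; note the +μ shift
    ϕ = 2π * rand()                        # initial angle on the ellipse
    ϕmin, ϕmax = ϕ - 2π, ϕ                 # shrinking bracket for ϕ
    while true
        x2 = x * cos(ϕ) + ν * sin(ϕ)       # candidate point on the ellipse
        loglike(x2 + μ) > logy && return x2
        ϕ < 0 ? (ϕmin = ϕ) : (ϕmax = ϕ)    # shrink the bracket toward ϕ = 0
        ϕ = ϕmin + (ϕmax - ϕmin) * rand()
    end
end

# Toy usage: Gaussian "likelihood" centered at 1.0, prior N(2.0, 0.5).
loglike(θ) = -0.5 * (θ - 1.0)^2
x = ess_step(0.0, loglike, 2.0, 0.5)
```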
Below are the effective sample sizes before and after the fix.
I ran the code below in the tests directory 30 times
and computed the mean and 95% confidence intervals.
```julia
d, n = 1, 20                 # input dimension and number of observations
ll = rand(d)
X = 2π * rand(d, n)          # random inputs in [0, 2π)
y = randn(n) .+ 0.5          # noisy targets with nonzero mean
kern = RQ(-1.0, -1.0, -1.0)  # rational quadratic kernel
```
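The aggregation over the 30 repetitions can be sketched as follows; `ess_runs` here is placeholder data standing in for the 30 measured effective sample sizes, and the interval is the usual normal-approximation 95% CI.

```julia
using Statistics

# Placeholder for 30 effective-sample-size measurements (illustrative only).
ess_runs = 150 .+ 30 .* randn(30)

# Mean and normal-approximation 95% confidence interval.
m = mean(ess_runs)
half = 1.96 * std(ess_runs) / sqrt(length(ess_runs))
ci = (m - half, m + half)
```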
Here are the results. The effective sample size has been significantly improved.

|         | before        | after          |
| ------- | ------------- | -------------- |
| noise   | 183 (135, 232) | 255 (198, 312) |
| kernel2 | 82 (71, 91)    | 117 (103, 131) |
| kernel3 | 105 (91, 118)  | 160 (143, 176) |
| kernel3 | 77 (70, 85)    | 113 (102, 125) |
Lastly, the sample_params routine was introduced because of ess.
However, I changed the way the hyperparameters are sampled, removing this dependency.
I think, though, that sample_params could still be used for other purposes (e.g., randomly choosing initial points for MAP inference), so I didn't remove it.
[1] Nishihara, Robert, Iain Murray, and Ryan P. Adams. "Parallel MCMC with generalized elliptical slice sampling." The Journal of Machine Learning Research 15.1 (2014): 2087-2112.