wujian16 / Cornell-MOE

A Python library for state-of-the-art Bayesian optimization algorithms, with the core implemented in C++.

unclear behavior of horseshoe prior #85

Open · lcnature opened this issue 5 years ago

lcnature commented 5 years ago

Hi, thanks for the great package! I was trying to understand the default prior placed on the noise standard deviation (I assume that is what the last few hyperparameters in the code represent). When I played around with the scale parameter of the implemented horseshoe prior, the distribution and the generated samples do not seem to match. Below is code that generates two plots: the peak of the histogram of the generated samples lies between -3 and -2, yet the log probability of the distribution still peaks at 0.

```python
import matplotlib.pyplot as plt
import numpy as np
from moe.optimal_learning.python.base_prior import HorseshoePrior

horseshoe = HorseshoePrior(0.1)        # scale parameter = 0.1
x = horseshoe.sample_from_prior(1000)  # draw 1000 samples

plt.hist(x, 20)                        # histogram peaks between -3 and -2
plt.show()

plt.plot(x, horseshoe.lnprob(x), '.')  # log probability peaks at 0 instead
plt.show()
```
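For what it's worth, the histogram location in the first plot can be reproduced without the library at all, assuming the standard horseshoe construction (a half-Cauchy local scale times a normal draw, mapped to log-space). Everything below is my own sketch, not Cornell-MOE code, so the construction may differ in detail from what `sample_from_prior` actually does:

```python
import numpy as np

# Hypothetical standalone reproduction of where log-space horseshoe
# samples should concentrate, assuming the standard construction:
#   lam ~ HalfCauchy(1),  x ~ N(0, (scale * lam)^2),  theta = log|x|.
rng = np.random.default_rng(0)
scale = 0.1
n = 100_000

lam = np.abs(rng.standard_cauchy(n))  # half-Cauchy local scales
x = rng.normal(0.0, scale * lam)      # horseshoe draws
theta = np.log(np.abs(x))             # map to log-space

# The bulk of the mass sits near log(scale) ~ -2.3, i.e. in the
# -3 to -2 range reported above, not at 0.
print(np.median(theta))
```

If the sampler concentrates there while `lnprob` peaks at 0, the two are being evaluated on different scales, which is consistent with the mismatch in the plots.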

I am worried this may affect the behavior of the algorithm. Any thoughts, or suggestions for an alternative prior on the noise variance?
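Not an official fix, but one workaround is a prior whose sampler and log-density are derived from the same distribution, so the two cannot drift apart. The sketch below is a hypothetical drop-in mirroring the `sample_from_prior`/`lnprob` interface used above, with a normal prior on theta = log(noise std); the class name and default parameters are my own choices:

```python
import numpy as np

class LogNormalNoisePrior:
    """Hypothetical alternative prior on theta = log(noise std).

    Both lnprob and sample_from_prior come from the same normal
    distribution N(mu, sigma^2) on theta, so the histogram of samples
    and the log-density curve agree by construction.
    """

    def __init__(self, mu=-2.0, sigma=1.0, seed=None):
        self.mu = mu
        self.sigma = sigma
        self._rng = np.random.default_rng(seed)

    def lnprob(self, theta):
        # Log-density of N(mu, sigma^2) evaluated at theta (log-space).
        theta = np.asarray(theta, dtype=float)
        return (-0.5 * ((theta - self.mu) / self.sigma) ** 2
                - np.log(self.sigma * np.sqrt(2.0 * np.pi)))

    def sample_from_prior(self, n_samples):
        # Samples live in the same log-space that lnprob evaluates.
        return self._rng.normal(self.mu, self.sigma, size=n_samples)
```

Repeating the histogram-versus-`lnprob` comparison from the snippet above with this class should show both peaking at `mu`, which is a quick sanity check for any prior plugged in here.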