RobertArbon closed this issue 7 years ago.
That's really odd, especially since the GPy version in the omnia channel should include the fix in https://github.com/SheffieldML/GPy/pull/314.
Maybe the best way to handle it is to clip negative values? I'm not sure what to do either.
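Something like this, perhaps. Just a sketch of the clipping idea; the array below stands in for whatever GPRegression.predict() returns for the candidate points:

```python
import numpy as np

# Illustrative only: 'variances' stands in for the model's predicted variances.
variances = np.array([1.2e-3, -4.7e-9, 5.0e-4])

# Floor the values at a small positive number before they reach the
# acquisition function, so a tiny negative estimate can't break e.g. a sqrt.
variances = np.clip(variances, 1e-10, None)
print(variances)
```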
I've re-run one test with the omnia version 1.5.6 (I was previously using the latest stable version from their repo) and still get negative variance. Unsure what to do as well.
done in #229
While implementing different acquisition functions for the GP strategy I found that the GPRegression model consistently gives a negative variance prediction.
To test this I put a print statement next to the model variance prediction (see the sketch below), in both the master branch here and in my new branch here. I created five test cases, one MSMBuilder and four different sklearn examples, including regression, classification and different estimators. You can find the tests and the log files here.
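A sketch of the kind of print statement I mean, assuming the standard GPy GPRegression.predict API; the data below is just a stand-in so the snippet runs on its own and is not taken from the actual test cases:

```python
import numpy as np
import GPy

# Stand-in data; in the real tests the model is fit on the search history.
X = np.random.uniform(0.0, 1.0, size=(30, 2))
y = np.random.randn(30, 1)

model = GPy.models.GPRegression(X, y)  # default kernel, purely for illustration
model.optimize()

# The check in question: print the predicted variance right where it is computed.
mean, var = model.predict(np.random.uniform(0.0, 1.0, size=(10, 2)))
print('predicted variance:', var.ravel())  # negative entries flag the problem
```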
I ran each test case for 20 iterations with seeds: 5, with (1) your original kernel and (2) a Matern52 kernel. Case (1) - the variance is consistently negative. Case (2) - the variance is consistently positive. I also ran case (1) with seeds: 10, but that made no difference.

This issue was brought up on the GPy issue tracker, and apparently solved, in 2016. See here. Unfortunately I'm not an expert in Bayesian statistics, so I haven't yet understood the finer details of the implementation. I suspect this is a problem with GPy itself rather than the suitability of the kernel you're using.
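For anyone who wants to poke at this outside the project, here is a standalone sketch of the comparison described above. It uses toy data and an RBF kernel as a stand-in for the original kernel, so the negative values may or may not show up here; the problem seems to depend on the numerics of the fit.

```python
import numpy as np
import GPy

def min_predicted_variance(kernel):
    """Fit a GPRegression model on toy data and return the smallest predicted variance."""
    rng = np.random.RandomState(0)
    X = rng.uniform(-3.0, 3.0, size=(40, 1))
    y = np.sin(X) + 0.05 * rng.randn(40, 1)
    model = GPy.models.GPRegression(X, y, kernel)
    model.optimize()
    _, var = model.predict(np.linspace(-3.0, 3.0, 200)[:, None])
    return var.min()

# RBF stands in for the original kernel (case 1); Matern52 is case (2).
print('RBF      min variance:', min_predicted_variance(GPy.kern.RBF(input_dim=1)))
print('Matern52 min variance:', min_predicted_variance(GPy.kern.Matern52(input_dim=1)))
```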