Closed Moyoxkit closed 3 years ago
It's possible that the current problems are caused by hidden kernel state (i.e. the same kernel object is reused for each bin instead of a new one being created). I'll look into this.
To debug I added the following two print statements:
# Print the hyperparameters before optimization (expected: the initial zeros)
print(gaussian_process.get_parameter_vector())

# Optimize the hyperparameter values in the emulator
result = minimize(
    fun=negative_log_likelihood,
    x0=gaussian_process.get_parameter_vector(),
    jac=grad_negative_log_likelihood,
)

# Load in the optimal hyperparameters
print(result.x)
gaussian_process.set_parameter_vector(result.x)
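To illustrate the suspected issue, here is a minimal, hypothetical sketch (stand-in classes, not the real emulator or GP library API): if every bin's GP wraps the *same* kernel object, setting the optimized vector for one bin silently changes the starting vector seen by the next bin, which would produce exactly the output pattern below.

```python
import numpy as np

class Kernel:
    # Stand-in for a real kernel: just holds a hyperparameter vector.
    def __init__(self, ndim):
        self.vector = np.zeros(ndim)

class GP:
    def __init__(self, kernel):
        self.kernel = kernel  # stores a reference, not a copy

    def get_parameter_vector(self):
        return self.kernel.vector

    def set_parameter_vector(self, vector):
        self.kernel.vector = np.asarray(vector)

shared_kernel = Kernel(ndim=6)
gps = [GP(shared_kernel) for _ in range(3)]  # every bin reuses the kernel

gps[0].set_parameter_vector([1.0] * 6)
# All bins now see the fit from bin 0 instead of starting from zeros:
print(gps[1].get_parameter_vector())  # → [1. 1. 1. 1. 1. 1.]
```

With a fresh `Kernel` per `GP`, the second print would still show zeros, matching the expected alternating output described above.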
Each bin should get a separate new GP to work with, so the expected output is (pseudo):
Array with zeros
Array with fit
Array with zeros
Array with fit
etc.
This is not what is currently happening:
[-1.60943791 0. 0. 0. 0. 0. ]
[-1.78753308 1.67716201 0.65840285 1.08828067 3.34462276 -0.26365946]
[-1.78753308 1.67716201 0.65840285 1.08828067 3.34462276 -0.26365946]
[-1.64943059 2.22835885 0.87401749 2.0874703 3.16535898 -0.60355372]
[-1.64943059 2.22835885 0.87401749 2.0874703 3.16535898 -0.60355372]
[-1.22373542 4.14311626 -0.67730976 3.67971948 3.92554453 -0.90429074]
[-1.22373542 4.14311626 -0.67730976 3.67971948 3.92554453 -0.90429074]
[-1.41765056 4.18653879 -1.54350733 2.37199912 3.836176 -0.27080644]
...
Meaning the kernel gets carried through for each bin, so in the end the same kernel is used for all bins. This also explains why only the boost factor was doing anything in my results: if every bin carries the fit for the last bin, only the boost factor has a scale that makes it relevant. Do you have any idea how to solve this?
It works now; there was still the check for `if kernel is None`, which of course should not be there in the binned case. It's ready for a proper formatting/styling check now.
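For reference, a minimal sketch of the fix just described (hypothetical names, not the actual emulator code): a factory that only built a kernel when `kernel is None` would let a previously-fitted kernel leak into the next bin, so in the binned case each call should construct a fresh kernel unconditionally.

```python
import numpy as np

class Kernel:
    # Stand-in for the real kernel: just holds a hyperparameter vector.
    def __init__(self, ndim):
        self.vector = np.zeros(ndim)

def build_binned_kernel(ndim, kernel=None):
    # Old (buggy) pattern: `if kernel is None: kernel = Kernel(ndim)`,
    # so a caller-supplied (already fitted) kernel was reused for the next bin.
    # Fixed for the binned case: always create a new kernel.
    return Kernel(ndim)

kernel_a = build_binned_kernel(6)
kernel_b = build_binned_kernel(6)
print(kernel_a is kernel_b)  # → False: each bin starts from its own zeros
```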
Maybe you can see if there is a solution to this. At the moment the GP is doing nothing, but the linear and polynomial models can already describe the bins a bit, so there might be a mistake somewhere.