Closed benxro closed 2 years ago
Hi Ben - thanks for finding this.
We have had plenty of stability issues with GP hyperparameter optimization with GaussianProcesses.jl. This is the primary reason we still keep the python alternative (scikit-learn), which is typically far more robust. However, I'll add the option to pass parameters to Optim - it seems we can use an `args`, `kwargs` setup that GaussianProcesses.jl expects here:
https://github.com/STOR-i/GaussianProcesses.jl/blob/c226196ccbe5117b5a3f32c036178a450c024eb2/src/optimize.jl#L19-L37
The only annoyance I can see with this function is that they set the default method via a positional argument, which makes passing `args` more difficult if we do not wish to have a CES dependency on Optim.
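For reference, a minimal sketch of what that could look like, assuming the positional-argument signature of `optimize!` linked above (the data, kernel choice, and use of `BackTracking` here are illustrative assumptions, not part of CES):

```julia
using GaussianProcesses, Optim, LineSearches

# Toy 1-D regression data, purely illustrative
x = 2π .* rand(20)
y = sin.(x) .+ 0.05 .* randn(20)

# Zero-mean GP with a squared-exponential kernel and log-noise -2.0
gp = GP(x, y, MeanZero(), SE(0.0, 0.0), -2.0)

# In the linked source, the Optim method is a positional argument with a
# default, so a non-default line search can (in principle) be passed as:
optimize!(gp, LBFGS(linesearch = LineSearches.BackTracking()))
```

Threading such an argument through CES would still require exposing it from `optimize_hyperparameters()`, which is the gap discussed above.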
Great, thank you very much for fixing this! :)
Sometimes when I run the Emulate step, I get the error message from the title with the stacktrace below.
This sounds very similar to the problems encountered here and here. The problem seems to be the HagerZhang line search algorithm that is used by default by Optim.optimize(). The proposed solution was to choose a different line search algorithm. However, I did not find an option to do so from the perspective of the CES package.
I'm new to Julia, but I tried adapting the function `optimize_hyperparameters()` in `src/GaussianProcess.jl` and couldn't manage to pass an alternative line search algorithm down to the `Optim.optimize()` method.
Is there currently the option to change the line search algorithm from within the CES package? Is this a known problem and are there ways to prevent this (e.g. preprocessing or filtering the data)?
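For context, the fix suggested in the linked issues looks like this at the plain Optim level, independent of CES and GaussianProcesses.jl (a minimal sketch; `BackTracking` is one common alternative to the default `HagerZhang` line search):

```julia
using Optim, LineSearches

# Standard Rosenbrock test function, minimum at (1, 1)
rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2

# Optim's first-order default pairs LBFGS with a HagerZhang line search;
# swapping in BackTracking is the commonly proposed workaround for
# "Linesearch failed to converge" errors:
result = optimize(rosenbrock, zeros(2),
                  LBFGS(linesearch = LineSearches.BackTracking()))
```

The open question is how to route such a `linesearch` choice through the CES emulator interface.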
Unfortunately, I'm not able to produce a minimal working example, but I could upload a data container for which this happens. Also, referring to my "sometimes" earlier: for a given data set the error occurs deterministically; it's just not clear to me when it happens, as it does not seem to depend on the number of training points or on particular outliers.
With kind regards, Ben