Hi, I’m a student and I’ve recently been learning about Bayesian optimization. I’ve been trying to make Fabolas compatible with George 0.3.1, and I think I managed it. I’d like to offer a few suggestions:
I suggest using a stationary kernel (e.g., an SE kernel) instead of the non-stationary kernel (the LinearKernel in Fabolas.py). When you run get_incumbent() (in Fabolas.py), the environment variables are projected to 1 and then mapped to 0 by _quadratic_bf(). get_incumbent() then calls predict() on a matrix whose environment column is all zeros, e.g., (a1, b1, 0), (a2, b2, 0), (a3, b3, 0), ...
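To make that projection concrete, here is a minimal sketch; I’m assuming _quadratic_bf is the quadratic basis function (1 - s)**2 from fabolas.py, which is what sends env = 1 to 0:

```python
import numpy as np

# Assumed form of the basis function from fabolas.py: it maps the
# projected environment value s = 1 (the full dataset) to 0.
def _quadratic_bf(s):
    return (1 - s) ** 2

# Points whose environment column has been projected to 1 by get_incumbent()
X_proj = np.array([[0.2, 0.7, 1.0],
                   [0.5, 0.1, 1.0],
                   [0.9, 0.4, 1.0]])

# After the basis function, the environment column is all zeros, so
# predict() is called on rows of the form (a, b, 0)
X_proj[:, -1] = _quadratic_bf(X_proj[:, -1])
print(X_proj[:, -1])  # [0. 0. 0.]
```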
If you use the LinearKernel on those inputs, the variance returned by predict() is all zeros and the mean is a vector whose elements are all identical. As a result, epmgp.py cannot work.
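Here is a self-contained numpy sketch of why that prediction is degenerate. This is not the Fabolas code itself: I’m assuming the Fabolas-style product kernel k((x, s), (x', s')) = k_x(x, x') * k_env(s, s'), with a plain dot-product kernel standing in for the LinearKernel on the transformed environment feature:

```python
import numpy as np

def se_kernel(A, B, ls=1.0):
    # Squared-exponential kernel: stays positive even at the origin
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def product_kernel(env_kernel):
    # SE kernel on the x columns, times env_kernel on the last column
    def k(A, B):
        return se_kernel(A[:, :-1], B[:, :-1]) * env_kernel(A[:, -1:], B[:, -1:])
    return k

linear_env = lambda a, b: a @ b.T   # k_env(s, s') = s * s' (dot product)
se_env = lambda a, b: se_kernel(a, b)

def gp_predict(kernel, X, y, X_star, noise=1e-6):
    # Standard GP posterior mean and variance (zero prior mean)
    K = kernel(X, X) + noise * np.eye(len(X))
    Ks = kernel(X_star, X)
    Kss = kernel(X_star, X_star)
    mean = Ks @ np.linalg.solve(K, y)
    var = np.diag(Kss - Ks @ np.linalg.solve(K, Ks.T))
    return mean, var

rng = np.random.default_rng(0)
X = rng.uniform(0.1, 1.0, size=(10, 3))   # last column = env feature
y = rng.normal(size=10)
X_star = rng.uniform(0.1, 1.0, size=(4, 3))
X_star[:, -1] = 0.0                        # env mapped to 0, as above

for name, ke in [("linear env kernel", linear_env), ("SE env kernel", se_env)]:
    mean, var = gp_predict(product_kernel(ke), X, y, X_star)
    print(name, "-> var:", np.round(var, 8), "mean:", np.round(mean, 8))

# With the linear env kernel, Ks and Kss vanish at env = 0, so the variance
# is exactly 0 and the mean collapses to the constant prior mean -- the
# degenerate Gaussian that epmgp.py cannot handle. The SE env kernel keeps
# the predictive variance strictly positive at the same points.
```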
The EnvPrior() parameter n_lr=degree+1 can be changed to n_lr=len(env_kernel).
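In snippet form (EnvPrior comes from the RoBO code; every argument other than n_lr is elided here, and that len(env_kernel) returns the kernel’s parameter count in George 0.3.1 is my assumption):

```python
# current construction of the prior in fabolas.py:
#   prior = EnvPrior(..., n_lr=degree + 1)
#
# suggested, so the number of linear-regression weights tracks the env
# kernel's parameter count instead of a hard-coded polynomial degree:
#   prior = EnvPrior(..., n_lr=len(env_kernel))
```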