This should work if you add the new params to the `__init__` method of your module. This works because skorch passes the `module__` arguments along to the `__init__` of the module:
```python
import gpytorch
import torch

from skorch.probabilistic import ExactGPRegressor


class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, likelihood, lengthscale=483, variance=2000):  # <==
        super().__init__(
            train_inputs=None,
            train_targets=None,
            likelihood=likelihood,
        )
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernel() + gpytorch.kernels.LinearKernel()
        )
        self.covar_module.base_kernel.kernels[0].lengthscale = lengthscale  # <==
        self.covar_module.base_kernel.kernels[1].variance = variance  # <==

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)


# initialize likelihood and model
likelihood = gpytorch.likelihoods.GaussianLikelihood()
device = "cuda" if torch.cuda.is_available() else "cpu"

gpr = ExactGPRegressor(
    ExactGPModel,
    likelihood=likelihood,
    criterion=gpytorch.mlls.ExactMarginalLogLikelihood,
    optimizer=torch.optim.Adam,
    lr=0.1,
    max_epochs=5000,
    device=device,
    batch_size=-1,
    module__lengthscale=123,
    module__variance=456,
).initialize()

assert gpr.module_.covar_module.base_kernel.kernels[0].lengthscale[0, 0] == 123.
assert gpr.module_.covar_module.base_kernel.kernels[1].variance[0, 0] == 456.

gpr.set_params(module__lengthscale=555, module__variance=3)

assert gpr.module_.covar_module.base_kernel.kernels[0].lengthscale[0, 0] == 555.
assert gpr.module_.covar_module.base_kernel.kernels[1].variance[0, 0] == 3.
```
When you run the grid search, name the params like this:

```python
param_grid = {"module__lengthscale": [1, 2, 3], "module__variance": [4, 5, 6]}
```

Of course, you can choose other names in the `__init__`; just remember to adjust the grid search names accordingly.
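For completeness, a minimal sketch of how the search itself could look, assuming hypothetical training arrays `X` and `y` (replace with your own data; float32 interacts best with torch) and the `gpr` estimator defined above:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV

# Hypothetical stand-in data; substitute your own training arrays.
X = np.random.rand(100, 5).astype(np.float32)
y = np.random.rand(100).astype(np.float32)

# skorch estimators follow the sklearn API, so they can be passed to
# GridSearchCV directly; pass an explicit scoring string rather than
# relying on a default estimator score.
param_grid = {"module__lengthscale": [1, 2, 3], "module__variance": [4, 5, 6]}
search = GridSearchCV(gpr, param_grid, cv=3, scoring="neg_mean_squared_error")
search.fit(X, y)
print(search.best_params_)
```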
This is extremely helpful. I had noticed the `module_` part of the code but did not understand how I could use it to set the parameters I needed. I will give this a try and report back.
This worked very well. Thank you again @BenjaminBossan. I wanted to ask if you have experience with the `torch.optim.LBFGS` optimizer. I am giving this a try right now, but I am unsure if I should try https://github.com/hjmshi/PyTorch-LBFGS instead.
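For reference, this is roughly what I am trying with the built-in optimizer (a rough sketch; as far as I understand, skorch calls `optimizer.step` with a closure, which LBFGS requires, and the values below are untuned guesses):

```python
# Same model as above, only the optimizer is swapped in; the
# hyperparameter values here are untuned guesses.
gpr_lbfgs = ExactGPRegressor(
    ExactGPModel,
    likelihood=likelihood,
    criterion=gpytorch.mlls.ExactMarginalLogLikelihood,
    optimizer=torch.optim.LBFGS,
    optimizer__lr=0.5,
    optimizer__max_iter=20,
    max_epochs=50,
    device=device,
    batch_size=-1,
)
```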
Best,
I plan on using skorch and `ExactGPModel` to carry out a grid search with sklearn. My code looks like this:
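```python
import gpytorch


class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, likelihood):
        super().__init__(
            train_inputs=None,
            train_targets=None,
            likelihood=likelihood,
        )
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(
            gpytorch.kernels.RBFKernel() + gpytorch.kernels.LinearKernel()
        )
        # These two values are the ones I would like to search over:
        self.covar_module.base_kernel.kernels[0].lengthscale = 483
        self.covar_module.base_kernel.kernels[1].variance = 2000

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
```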
I want to find the best starting points for `self.covar_module.base_kernel.kernels[0].lengthscale` and `self.covar_module.base_kernel.kernels[1].variance`, which are currently hardcoded. I have tried many different things to make this work. Could you shed some light? I want to set up a grid where I try various values of `lengthscale` and `variance`. I really appreciate any help you can provide.