Open rimaabaidi opened 3 years ago
All GP parameters are just torch Parameters. From a practical perspective, there's no meaningful distinction between hyperparameters and the other parameters you define: calling `module.parameters()` or `module.named_parameters()` gets you all parameters on that Module and all of its submodules. Any mechanism you can use in standard PyTorch to access parameters will work here.
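As a quick illustration of that point (plain PyTorch, with made-up module and parameter names), `named_parameters()` walks the whole submodule tree, so any parameter you register yourself shows up alongside the built-in ones:

```python
import torch

class WeightedCombo(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # a submodule with its own parameters (weight, bias)
        self.inner = torch.nn.Linear(2, 2)
        # a custom "hyperparameter" registered directly on this module
        self.raw_weight = torch.nn.Parameter(torch.zeros(2))

m = WeightedCombo()
# named_parameters() yields both the submodule's params and raw_weight
names = [name for name, _ in m.named_parameters()]
print(names)  # includes 'raw_weight', 'inner.weight', 'inner.bias'
```

Passing `m.parameters()` to any torch optimizer therefore trains the custom weights together with everything else.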
Thank you for your answer, and sorry for the late response. I didn't have to do anything for the weights; they were the outputscales of the ScaleKernel, which are optimized automatically. But now I run into a problem when I combine my model with BoTorch. The error is:

```
test_train_covar = test_train_covar.evaluate_kernel()
AttributeError: 'Tensor' object has no attribute 'evaluate_kernel'
```

The way I did it was to have a function that returns the final kernel (an additive or product kernel of complex kernels), and I pass this as my covar_module. The docs say the method `evaluate` should be used on a kernel object, but I am not sure how and where. Am I missing something? Let me know if you need my code.
Hello,
Could you please provide your code? I am also interested in using this model for BayesOpt.
Br, Jimmy
@rima2992 could you try wrapping your implementation with the context manager `with gpytorch.settings.lazily_evaluate_kernel(False):`?
Hello, I would like to implement a neural network where I pass kernels to the layers, which then perform a linear combination or a multiplication of the kernels. The procedure is described in this paper: https://arxiv.org/abs/1806.04326

I think I have an idea of how to do this (still a bit vague, though), but my question is: how do I pass the weights of the linear layer as parameters to the GP model, along with the hyperparameters of the resulting kernel? For the primitive kernels, passing the parameters to the optimizer is already implemented, but in my case, should I keep track of the hyperparameters myself? I hope I am clear. I would appreciate your help!
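To make the question concrete, here is a hypothetical sketch (plain PyTorch; the class `WeightedKernelSum`, the helper `rbf`, and the positivity transform are all my own assumptions, not from the paper) of a layer that forms a learnable linear combination of base kernels. The key point is that registering the mixing weights as an `nn.Parameter` means the GP's optimizer picks them up automatically via `parameters()`, with no manual bookkeeping:

```python
import torch

class WeightedKernelSum(torch.nn.Module):
    """Learnable linear combination of base kernels (illustrative sketch).

    base_kernels: callables (x1, x2) -> Gram matrix. The mixing weights are
    a registered Parameter, so model.parameters() exposes them to the
    optimizer exactly like any built-in kernel hyperparameter.
    """
    def __init__(self, base_kernels):
        super().__init__()
        self.base_kernels = base_kernels
        self.raw_weights = torch.nn.Parameter(torch.zeros(len(base_kernels)))

    def forward(self, x1, x2):
        # softplus keeps the effective weights positive
        w = torch.nn.functional.softplus(self.raw_weights)
        grams = torch.stack([k(x1, x2) for k in self.base_kernels])
        # weighted sum over the kernel dimension
        return (w.view(-1, 1, 1) * grams).sum(dim=0)

def rbf(x1, x2, lengthscale=1.0):
    """Toy RBF kernel on row vectors, used here only as a stand-in."""
    sq_dist = torch.cdist(x1, x2) ** 2
    return torch.exp(-0.5 * sq_dist / lengthscale**2)

x = torch.randn(5, 3)
layer = WeightedKernelSum([rbf, lambda a, b: rbf(a, b, lengthscale=0.5)])
K = layer(x, x)  # combined (5, 5) Gram matrix
```

A product layer would be analogous, multiplying the Gram matrices elementwise instead of summing them. Note that in GPyTorch specifically, wrapping each base kernel in a `ScaleKernel` and summing them achieves much the same effect, since the outputscales then act as the linear weights.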