cornellius-gp / gpytorch

A highly efficient implementation of Gaussian Processes in PyTorch

How to define customized kernels to enable GPU computation? #2409

Closed: cookbook-ms closed this issue 9 months ago

cookbook-ms commented 9 months ago

Discussed in https://github.com/cornellius-gp/gpytorch/discussions/2408

Originally posted by **cookbook-ms** September 18, 2023

Below is how I define my custom kernel. It works on CPU, but it doesn't on GPU. What do I need to change or add to enable GPU computation? I tested my code with other predefined kernels, which do work, so the problem is likely in the kernel class.

```python
import torch
from gpytorch.kernels import Kernel
from gpytorch.constraints import Interval


class Kernel(Kernel):
    def __init__(self, laplacians, kappa_bounds=(1e-5, 1e5)):
        super().__init__()
        self.L1, self.L1_down, self.L1_up = laplacians

        # register the raw parameters
        self.register_parameter(
            name='raw_kappa_down', parameter=torch.nn.Parameter(torch.zeros(1, 1))
        )
        self.register_parameter(
            name='raw_kappa_up', parameter=torch.nn.Parameter(torch.zeros(1, 1))
        )

        # set the kappa constraints
        self.register_constraint('raw_kappa_down', Interval(*kappa_bounds))
        self.register_constraint('raw_kappa_up', Interval(*kappa_bounds))
        # we do not set a prior on the parameters

    # set up the actual parameters
    @property
    def kappa_down(self):
        return self.raw_kappa_down_constraint.transform(self.raw_kappa_down)

    @kappa_down.setter
    def kappa_down(self, value):
        self._set_kappa_down(value)

    def _set_kappa_down(self, value):
        if not torch.is_tensor(value):
            value = torch.as_tensor(value).to(self.raw_kappa_down)
        self.initialize(
            raw_kappa_down=self.raw_kappa_down_constraint.inverse_transform(value)
        )

    @property
    def kappa_up(self):
        return self.raw_kappa_up_constraint.transform(self.raw_kappa_up)

    @kappa_up.setter
    def kappa_up(self, value):
        self._set_kappa_up(value)

    def _set_kappa_up(self, value):
        if not torch.is_tensor(value):
            value = torch.as_tensor(value).to(self.raw_kappa_up)
        self.initialize(
            raw_kappa_up=self.raw_kappa_up_constraint.inverse_transform(value)
        )

    def _eval_covar_matrix(self):
        """Compute the full kernel matrix as a matrix exponential of the
        weighted Laplacians; exposed via a property to avoid repeated
        computation of the kernel matrix."""
        return torch.linalg.matrix_exp(
            -(self.kappa_down * self.L1_down + self.kappa_up * self.L1_up)
        )

    @property
    def covar_matrix(self):
        return self._eval_covar_matrix()

    # the kernel function: the inputs are integer indices into the
    # precomputed covariance matrix
    def forward(self, x1, x2=None, **params):
        if x2 is None:  # check before casting; x2.long() would fail on None
            x2 = x1
        x1 = x1.long().squeeze(-1)
        x2 = x2.long().squeeze(-1)
        # index rows by x1 and columns by x2
        return self.covar_matrix[x1, :][:, x2]
```
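For context, a common cause of exactly this symptom: tensors stored as plain attributes (here `self.L1`, `self.L1_down`, `self.L1_up`) are not registered with the module, so `.cuda()` / `.to(device)` moves the `kappa` parameters to the GPU while the Laplacians stay on the CPU, and the kernel then fails with a device mismatch. Below is a minimal sketch of that fix using `register_buffer`, assuming the Laplacians are dense tensors (or array-likes convertible via `torch.as_tensor`); the class name `MatrixExpKernel` and the dummy identity Laplacians are illustrative, not from the thread:

```python
import torch
from gpytorch.kernels import Kernel
from gpytorch.constraints import Interval


class MatrixExpKernel(Kernel):
    """Sketch of the kernel above, with the Laplacians registered as
    buffers so that .to(device) / .cuda() moves them with the module."""

    def __init__(self, laplacians, kappa_bounds=(1e-5, 1e5)):
        super().__init__()
        L1, L1_down, L1_up = laplacians
        # Buffers, unlike plain attributes, travel with the module's
        # parameters when the module changes device or dtype.
        self.register_buffer('L1', torch.as_tensor(L1))
        self.register_buffer('L1_down', torch.as_tensor(L1_down))
        self.register_buffer('L1_up', torch.as_tensor(L1_up))

        self.register_parameter('raw_kappa_down', torch.nn.Parameter(torch.zeros(1, 1)))
        self.register_parameter('raw_kappa_up', torch.nn.Parameter(torch.zeros(1, 1)))
        self.register_constraint('raw_kappa_down', Interval(*kappa_bounds))
        self.register_constraint('raw_kappa_up', Interval(*kappa_bounds))

    @property
    def kappa_down(self):
        return self.raw_kappa_down_constraint.transform(self.raw_kappa_down)

    @property
    def kappa_up(self):
        return self.raw_kappa_up_constraint.transform(self.raw_kappa_up)

    def forward(self, x1, x2=None, **params):
        if x2 is None:
            x2 = x1
        x1 = x1.long().squeeze(-1)
        x2 = x2.long().squeeze(-1)
        # Every operand now lives on the module's device, so this also
        # runs on CUDA.
        K = torch.linalg.matrix_exp(
            -(self.kappa_down * self.L1_down + self.kappa_up * self.L1_up)
        )
        return K[x1, :][:, x2]


if __name__ == '__main__':
    n = 10
    laplacians = tuple(torch.eye(n) for _ in range(3))  # dummy Laplacians
    kernel = MatrixExpKernel(laplacians)
    if torch.cuda.is_available():
        kernel = kernel.cuda()  # the registered buffers move too
    device = kernel.raw_kappa_down.device
    x = torch.arange(n, dtype=torch.get_default_dtype(), device=device).unsqueeze(-1)
    print(kernel(x, x).to_dense().shape)  # torch.Size([10, 10])
```

An alternative is to move the Laplacians explicitly inside `forward` (e.g. `self.L1_down.to(x1.device)`), but buffers are also saved in `state_dict` and converted by `.double()` / `.half()`, which is usually what you want for fixed matrices like these.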
Balandat commented 9 months ago

Closing out in favor of the discussion.