Hi @surtantheta
Our code does not implement multi-GPU training at the moment, mainly because of limits in the parallelization of the GP in GPyTorch.
However, recent versions of GPyTorch support multi-GPU. Take a look at this example: it seems that you just need to use gpytorch.kernels.MultiDeviceKernel()
to spread the kernel computation over multiple devices.
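For reference, a minimal sketch along the lines of GPyTorch's multi-GPU exact GP regression tutorial (the RBF kernel, model name, and dummy data here are illustrative; wiring in the ResNet feature extractor is not shown):

```python
import torch
import gpytorch

class MultiGPUGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood, n_devices, output_device):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        base_covar = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
        # MultiDeviceKernel splits the kernel matrix computation across GPUs
        # and gathers the result on output_device
        self.covar_module = gpytorch.kernels.MultiDeviceKernel(
            base_covar,
            device_ids=range(n_devices),
            output_device=output_device,
        )

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)

# Illustrative usage with random data; inputs and model live on the output device
output_device = torch.device("cuda:0")
n_devices = torch.cuda.device_count()
train_x = torch.randn(100, 3).to(output_device)
train_y = torch.randn(100).to(output_device)
likelihood = gpytorch.likelihoods.GaussianLikelihood().to(output_device)
model = MultiGPUGPModel(
    train_x, train_y, likelihood, n_devices, output_device
).to(output_device)
```

In a ResNet + GP setup, the feature extractor would presumably be parallelized separately (e.g. with torch.nn.DataParallel), with MultiDeviceKernel handling only the GP kernel computation.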
Can you provide the code, or let me know how we can add multi-GPU support for training this model? Because of the ResNet + GP structure, I am unable to implement it directly.