BayesWatch / deep-kernel-transfer

Official PyTorch implementation of the paper "Bayesian Meta-Learning for the Few-Shot Setting via Deep Kernels" (NeurIPS 2020)
https://arxiv.org/abs/1910.05199

Multi GPU Training #7

Closed: surtantheta closed this issue 2 years ago

surtantheta commented 2 years ago

Could you provide code, or let me know how to enable multi-GPU training for this model? Because of the ResNet + GP structure, I am unable to implement it directly.

mpatacchiola commented 2 years ago

Hi @surtantheta

Our code does not implement multi-GPU training at the moment, mainly because of limits in the parallelization of the GP in GPyTorch.

However, recent versions of GPyTorch support multi-GPU. Take a look at this example. It seems that you just need to use gpytorch.kernels.MultiDeviceKernel() to spread the kernel computation over multiple devices.
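
A minimal sketch of how this could look, following the structure of GPyTorch's multi-GPU exact-GP example rather than this repo's code. Names such as `MultiGPUExactGP`, `n_devices`, and `output_device` are illustrative; the RBF base kernel stands in for the deep kernel on top of the ResNet features.

```python
import torch
import gpytorch


class MultiGPUExactGP(gpytorch.models.ExactGP):
    """Exact GP whose kernel matrix is split across GPUs via MultiDeviceKernel."""

    def __init__(self, train_x, train_y, likelihood, n_devices, output_device):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        # Placeholder base kernel; in the deep-kernel setting this would act on
        # the feature extractor's output instead of raw inputs.
        base_covar = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
        # Spread the kernel evaluation over the available GPUs.
        self.covar_module = gpytorch.kernels.MultiDeviceKernel(
            base_covar,
            device_ids=range(n_devices),
            output_device=output_device,
        )

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)


# Illustrative usage (train_x / train_y are placeholders for your features and labels):
# output_device = torch.device("cuda:0")
# likelihood = gpytorch.likelihoods.GaussianLikelihood().to(output_device)
# model = MultiGPUExactGP(train_x, train_y, likelihood,
#                         n_devices=torch.cuda.device_count(),
#                         output_device=output_device).to(output_device)
```

Note that this only parallelizes the GP kernel; the ResNet feature extractor would need to be parallelized separately (e.g. with torch.nn.DataParallel) if its forward pass is the bottleneck.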