katerakelly / oyster

Implementation of Efficient Off-policy Meta-learning via Probabilistic Context Variables (PEARL)
MIT License

Cannot train with GPU on RTX 2080 #11

Closed duongnhatthang closed 4 years ago

duongnhatthang commented 4 years ago

I installed it as instructed on Ubuntu 16.04 with an RTX 2080 Ti. After creating the environment with `conda env create -f environment.yml`, the project works correctly on CPU. On GPU, however, I noticed that PyTorch was built against CUDA 9.0.176 (check `torch.version.cuda` in a Python shell), even though I installed CUDA 10.2 on the machine (shown in `nvidia-smi`) and added the CUDA path to `LD_LIBRARY_PATH`. The RTX 2080 Ti is a Turing GPU, which needs CUDA 10 to work. It probably works fine on other machines, but can you take a look at this?
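For reference, a quick way to see which CUDA toolkit the installed PyTorch build was compiled against (as opposed to the driver version reported by `nvidia-smi`), and whether the GPU is visible at all, is a short Python session like the sketch below; the version strings in the comments are just illustrative.

```python
import torch

# CUDA toolkit the PyTorch build was compiled against (None for a CPU-only build).
# This is what matters for Turing cards, not the driver version in nvidia-smi.
print(torch.version.cuda)          # e.g. '9.0.176' on the old env

# Whether PyTorch can see a CUDA device at all.
print(torch.cuda.is_available())

if torch.cuda.is_available():
    # Name and compute capability of the first GPU; the RTX 2080 Ti is Turing
    # (compute capability 7.5), which CUDA 9 builds do not target.
    print(torch.cuda.get_device_name(0))
    print(torch.cuda.get_device_capability(0))
```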

katerakelly commented 4 years ago

Yes, it was using CUDA 9. I've updated the conda env to use CUDA 10 now.
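A minimal sanity check after recreating the env (assuming the updated environment.yml now pulls a CUDA-10 build of PyTorch) could look like this sketch:

```python
import torch

# Expect a 10.x toolkit string from the updated environment (assumed version).
assert torch.version.cuda is not None and torch.version.cuda.startswith("10")

# Run a small op on the GPU to confirm the Turing card actually executes kernels.
x = torch.randn(64, 64, device="cuda")
print((x @ x).sum().item())
```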