Installing PyTorch with GPU support via Conda, as we currently do, generates huge images because the Conda binaries come pre-built with their own CUDA support. Since our base image already ships a CUDA toolkit, we could instead build PyTorch from source so that it links against the locally installed toolkit.
See this discussion: https://discuss.pytorch.org/t/cudnn-vs-cudatoolkit/154164/2
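A minimal sketch of what the source build could look like in the Dockerfile, assuming an `nvidia/cuda` devel base image; the image tag, architecture list, and paths below are illustrative assumptions, not the actual configuration:

```dockerfile
# Sketch only: base image tag and versions are assumptions.
# A -devel image is needed because building from source requires nvcc and headers.
FROM nvidia/cuda:12.1.1-cudnn8-devel-ubuntu22.04

RUN apt-get update && apt-get install -y --no-install-recommends \
        git python3 python3-pip cmake ninja-build \
    && rm -rf /var/lib/apt/lists/*

# Build PyTorch against the CUDA toolkit already shipped in the base image,
# instead of pulling pre-built CUDA binaries via Conda.
RUN git clone --recursive https://github.com/pytorch/pytorch /opt/pytorch
WORKDIR /opt/pytorch

# TORCH_CUDA_ARCH_LIST restricts compiled kernels to the GPU architectures
# actually targeted, which further reduces image size (values are examples).
RUN pip3 install -r requirements.txt \
    && USE_CUDA=1 TORCH_CUDA_ARCH_LIST="8.0;8.6" python3 setup.py install
```

Note that building from source is considerably slower than installing a pre-built package, so the image-size gain trades off against CI build time.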