fastai / docker-containers

Docker images for fastai
https://hub.docker.com/u/fastai
Apache License 2.0

fastai/fastai:latest, CUDA & cuDNN Unavailable #74

Open LTeder opened 2 years ago

LTeder commented 2 years ago

Hello (again). I'm using the fastai:latest container for neural network inference. (I see it was updated yesterday.) I can't seem to access my GPU from inside the container. I'm on a laptop with an RTX 3070 Max-Q and a Ryzen 9 5900HS, running Docker on WSL2 Debian. Here is a sample Dockerfile:

FROM fastai/fastai:latest

# Extra dependencies for ONNX GPU inference and headless image decoding
RUN pip install --no-cache-dir --upgrade pip \
 && pip install --no-cache-dir onnxruntime-gpu opencv-python-headless

ENTRYPOINT ["/bin/bash", "-c"]
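For reference, I build it with something like the following (fastai-test is just the local tag I picked, not anything official):

docker build -t fastai-test .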

I test this using nvidia-smi and python -c "import torch; print(torch.cuda.is_available(), torch.backends.cudnn.is_available())". The Nvidia tool properly detects my hardware and the correct versions of the drivers as they are on Windows 11, but the Python statements both return False. (The pip installs may be omitted, producing the same result.)
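Concretely, the check I run looks roughly like this (same hypothetical fastai-test tag as above); since the entrypoint is bash -c, the whole command is passed as one string:

docker run --rm --gpus all fastai-test "nvidia-smi; python -c 'import torch; print(torch.cuda.is_available(), torch.backends.cudnn.is_available())'"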

The following containers work without modification:

docker run --rm --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
docker run -it --rm --gpus all --entrypoint /bin/bash pytorch/torchserve:latest-gpu

These show the same nvidia-smi output with the Python statements returning True.

After some searching yesterday, I suspect the fastai container ships duplicate or conflicting copies of some NVIDIA libraries; the sketch below shows roughly how I've been checking for that. I will update this issue if I find a solution. Any suggestions or tips are appreciated.
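These are generic inspection commands run inside the container (again using my hypothetical fastai-test tag), not anything specific to the fastai image:

docker run --rm --gpus all fastai-test "pip list | grep -i -E 'cuda|nvidia|torch'; python -c 'import torch; print(torch.__version__, torch.version.cuda)'; ldconfig -p | grep -i libcuda"

If the pip-installed CUDA runtime packages disagree with what torch.version.cuda reports, or libcuda isn't visible in the linker cache, that would point to the kind of duplication I'm suspecting.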