archdyn opened this issue 5 years ago
The NVIDIA (or any other) runtime is not available during the build stage, which is why `torch.cuda.is_available()` always returns False (see https://github.com/NVIDIA/nvidia-docker/issues/595 for example). So the proposed workaround (`FORCE_CUDA`) is the correct way to handle it.
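For reference, the `FORCE_CUDA` pattern works by letting an environment variable override the runtime CUDA check during the build. A minimal sketch of the gating logic, assuming a helper like this in the build script (the function name `should_build_cuda` and its parameter are illustrative, not the repo's actual `setup.py`):

```python
import os

def should_build_cuda(cuda_available: bool) -> bool:
    """Decide whether to compile the CUDA extensions.

    During `docker build` no GPU runtime is mounted, so the runtime
    check reports False; setting FORCE_CUDA=1 overrides it, as long
    as the CUDA toolkit (nvcc) is present in the image.
    """
    if os.environ.get("FORCE_CUDA", "0") == "1":
        return True
    return cuda_available

# During `docker build`: no GPU runtime, but FORCE_CUDA is set
os.environ["FORCE_CUDA"] = "1"
print(should_build_cuda(cuda_available=False))  # True
```

The key point is that the override only skips the *detection* step; the CUDA toolkit still has to be installed in the build image for compilation to succeed.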
@denis-sumin @miguelvr @fmassa I've opened a PR with the `FORCE_CUDA` flag as an option. By the way, I've tested that this workaround works fine.
@fmassa close this?
@miguelvr I don't think this is solved yet. I still need to do tricks to get it to work with a GPU inside Docker.
Hi @archdyn, I ran into the same problem as you: `RuntimeError: Not compiled with GPU support (nms at /algo_code/maskrcnn_benchmark/csrc/nms.h:22)`. By the way, I'm using docker instead of nvidia-docker. I would like to ask: how did you solve it?
@IssamLaradji
This is an old thread, but for anybody who encounters this problem (a GPU build of the repo inside Docker) and for whom `FORCE_CUDA` doesn't work, maybe this issue can help. With these changes (and by preventing installation of the latest pytorch and torchvision), I made a Dockerfile that works.
Happy coding!
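For anyone landing here: the `FORCE_CUDA` route amounts to setting the variable before the compile step in the Dockerfile, so the CUDA extensions are built even though no GPU is visible at build time. A hedged sketch (the base image tag and paths are assumptions, not the repo's exact Dockerfile):

```dockerfile
# Assumed base image with the CUDA toolkit (nvcc) available at build time
FROM nvidia/cuda:10.0-cudnn7-devel-ubuntu16.04

# Force the CUDA extensions to compile even though no GPU runtime
# is mounted during `docker build`
ENV FORCE_CUDA="1"

WORKDIR /maskrcnn-benchmark
COPY . .
RUN python setup.py build develop
```

At run time the container still needs the NVIDIA runtime (nvidia-docker / `--gpus all`) for the compiled extensions to actually see a GPU.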
❓ Questions and Help
Hello,
I have a strange problem with the Docker image. When I build the Docker image following the instructions in INSTALL.md and then try training on the coco2014 dataset with the command below, I get `RuntimeError: Not compiled with GPU support (nms at ./maskrcnn_benchmark/csrc/nms.h:22)`.
But when I change the Dockerfile, comment out the line `python setup.py build develop` before `WORKDIR /maskrcnn-benchmark`, and then execute `python setup.py build develop` inside my built Docker container, I can train without problems.
My environment when running the Docker container:
Does somebody know why this problem happens?