Closed — cmbroth closed this issue 4 years ago
Okay, I added support for NVIDIA GPUs with the CUDA
docker image. You need to download a TensorFlow model.
For whatever reason it can take a while to start.
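For anyone trying it, a minimal sketch of running the CUDA image with GPU access might look like the following. The image name, tag, port, and model volume path here are assumptions and may not match the actual project defaults:

```shell
# Hypothetical invocation: expose the GPU to the container via the
# NVIDIA container runtime and mount a directory holding the
# downloaded TensorFlow model (paths/ports are illustrative only)
docker run -it --gpus all \
  -p 8080:8080 \
  -v "$(pwd)/models:/opt/doods/models" \
  snowzach/doods:cuda
```

The `--gpus all` flag requires the NVIDIA container toolkit to be installed on the host.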
Thanks for doing this. It seems to be working just fine for me. I did get some errors due to a mismatch between the libcuda version and the kernel driver; I think this is because the NVIDIA image lags a little behind the kernel driver releases. I added an `apt-get dist-upgrade -y` to your Dockerfile.base.cuda file and rebuilt using `make docker CONF=cuda`. I did get some errors from the Makefile, but they didn't seem to cause any issues.
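The change described above would look roughly like this inside Dockerfile.base.cuda. The base image tag is an assumption; the actual tag in the project may differ:

```dockerfile
# Assumed CUDA base image; the real tag in Dockerfile.base.cuda may differ
FROM nvidia/cuda:10.2-cudnn7-runtime-ubuntu18.04

# Pull the image's packages forward so the bundled libcuda matches
# newer host kernel drivers, then trim the apt cache
RUN apt-get update && \
    apt-get dist-upgrade -y && \
    rm -rf /var/lib/apt/lists/*
```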
Looks like the TensorFlow project has a Docker image that uses the GPU. That might be worth looking into in the future; it may save you a little time.
Does this mean the project would run on an Nvidia Jetson / Nvidia Xavier at decent speed?
No, I don't have an ARM GPU version (yet). Getting it working reliably on x86 has proven to be a huge pain.
NVIDIA provides Docker images with GPU support that could be used as a base for DOODS:
https://github.com/NVIDIA/nvidia-docker
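Once the NVIDIA container toolkit is installed, a quick sanity check that the runtime actually exposes the GPU to containers is to run `nvidia-smi` from one of those base images (the tag shown is just an example):

```shell
# Verify GPU passthrough: this should print the same nvidia-smi table
# you see on the host (example tag; pick one matching your driver)
docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
```

If that fails with a driver/library version mismatch, it is usually the same libcuda-vs-kernel-driver skew mentioned earlier in this thread.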