Closed: will-44 closed this issue 5 months ago
+1 on this. The error is thrown when running the example launch file roslaunch nvblox_ros nvblox_ros_panopt.launch rviz:=true inside Docker.
I also got a warning earlier, when first running the container just after it finished building:
WARNING: The NVIDIA Driver was not detected. GPU functionality will not be available. Use the NVIDIA Container Toolkit to start this container with GPU support; see https://docs.nvidia.com/datacenter/cloud-native/ .
From this I understand that the container doesn't have access to the NVIDIA driver. The driver is installed on the host machine, by the way, and nvidia-smi works just fine there.
As stated in the warning message, installing and configuring the NVIDIA Container Toolkit fixes the issue.
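On Ubuntu, the installation looks roughly like this (a sketch based on NVIDIA's install guide linked in the warning; the repository URLs may change over time, so follow the official docs if in doubt):

```bash
# Add NVIDIA's apt repository key and source list for the Container Toolkit
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

# Install the toolkit itself
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
```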
You also need to add the options --runtime=nvidia --gpus all to the docker run command at the end of the run_docker.sh file.
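For illustration, the end of run_docker.sh would then look something like the line below; the image name and the other options are placeholders for whatever the script actually uses, and only the two GPU flags are the real addition:

```bash
# Hypothetical final line of run_docker.sh; "nvblox_image" is a placeholder.
# The GPU flags are the relevant change.
docker run --runtime=nvidia --gpus all -it --rm nvblox_image /bin/bash
```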
Thank you for your help! It works now! (I just had to remove --runtime=nvidia, otherwise I got this error: docker: Error response from daemon: unknown or invalid runtime name: nvidia.)
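For what it's worth, that error means no runtime named nvidia is registered with the Docker daemon; on Docker 19.03 or newer with the Container Toolkit installed, --gpus all alone is enough. If you wanted --runtime=nvidia to work as well, registering the runtime would look roughly like this (a sketch, assuming a standard Docker setup):

```bash
# Write the "nvidia" runtime entry into /etc/docker/daemon.json, then
# restart the daemon so the new runtime name is picked up
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```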
Hello!
I'm currently encountering a CUDA-related issue while trying to use your package within a Docker environment. The error message I'm receiving is as follows:
CUDA error = 35 at /root/nvblox_ws/src/nvblox_ros1/nvblox/nvblox/include/nvblox/core/internal/impl/unified_ptr_impl.h:48 'cudaMallocHost(&cuda_ptr, sizeof(T))'. Error string: CUDA driver version is insufficient for CUDA runtime version.
Upon checking the CUDA version in my Docker container, it appears to be 11.8.
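For reference, here is roughly how I compared the versions (assuming nvcc is available on the container's PATH):

```bash
# Inside the container: the CUDA toolkit/runtime version
nvcc --version
# On the host: the installed NVIDIA driver and the highest CUDA version it supports
nvidia-smi
```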
My CUDA version on the laptop matches (also 11.8, with NVIDIA driver 520), and I'm running Ubuntu 20.04 on an x86_64 machine.
Do you have any idea about this issue? Thank you very much for your help!