land007 opened 4 years ago
`/usr/local/cuda` is mounted from the host device in read-only mode, so you can't write to it from within the container.
For more info, please see here: https://github.com/NVIDIA/nvidia-docker/wiki/NVIDIA-Container-Runtime-on-Jetson#mount-plugins
My current environment (driver and library versions) is as follows:
Using a different version of CUDA etc. than what ships with JetPack-L4T is currently not supported.
However, you could try making your own base image and installing the CUDA packages inside your container. What you should do first is move/rename `/etc/nvidia-container-runtime/host-files-for-container.d/cuda.csv` on your host device (and `cudnn.csv`, `tensorrt.csv`), so those CUDA files don't get mounted into your container.
Thanks for your reply; that does not seem easy. The build of nvcr.io/nvidia/l4t-base does not appear to live in this project, and I am not sure whether the host CUDA driver would work with calls from an older CUDA version inside Docker.
Dockerfile:

```dockerfile
FROM nvcr.io/nvidia/l4t-ml:r32.4.3-py3
```

Build output:

```
Step 16/19 : RUN cd /usr/local/cuda/targets/aarch64-linux/lib && mkdir 123
 ---> Running in d0f4ef0fa4c4
mkdir: cannot create directory '123': Read-only file system
```
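For reference, the custom-base-image suggestion above might look roughly like this. This is only a sketch: it assumes the CSV manifests have already been moved aside on the host, and the package name is a placeholder, since I have not verified which CUDA Debian packages are available for aarch64/L4T.

```dockerfile
# Rough sketch, not verified: assumes the host's cuda.csv/cudnn.csv/tensorrt.csv
# manifests have been moved aside so the runtime no longer mounts host CUDA.
FROM nvcr.io/nvidia/l4t-base:r32.4.3

# Placeholder: install the desired CUDA toolkit version inside the image.
# The actual package name and apt repository for your CUDA version on
# aarch64 must be checked against NVIDIA's repositories.
RUN apt-get update && \
    apt-get install -y --no-install-recommends <cuda-toolkit-package> && \
    rm -rf /var/lib/apt/lists/*
```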