NVIDIA-ISAAC-ROS / isaac_ros_common

Common utilities, packages, scripts, Dockerfiles, and testing infrastructure for Isaac ROS packages.
https://developer.nvidia.com/isaac-ros-gems

Error when running docker #1

Closed kaganerunsal closed 2 years ago

kaganerunsal commented 2 years ago

Hello, I am trying to use the Isaac ROS environment with my Jetson Xavier NX board (Jetpack 4.6.1, L4T 32.6). My ultimate aim is to run hardware-accelerated AprilTag detection code.

***When I run the script with sudo ./run_dev.sh, the Docker image builds successfully, but I get the error below when it tries to create the group ID:

'gid '0' already exists'

***If I comment out that part (Line 188-192) of Dockerfile.aarch64.base and run the container without the user "admin" (to get rid of the above error), I get the long error below:

docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: Running hook #0:: error running hook: exit status 1, stdout: src: /usr/lib/aarch64-linux-gnu/libcudnn.so.8, src_lnk: libcudnn.so.8.2.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn.so.8, dst_lnk: libcudnn.so.8.2.1 src: /usr/lib/aarch64-linux-gnu/libcudnn.so, src_lnk: /etc/alternatives/libcudnn_so, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn.so, dst_lnk: /etc/alternatives/libcudnn_so src: /usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8, src_lnk: libcudnn_ops_infer.so.8.2.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8, dst_lnk: libcudnn_ops_infer.so.8.2.1 src: /usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so, src_lnk: /etc/alternatives/libcudnn_ops_infer_so, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so, dst_lnk: /etc/alternatives/libcudnn_ops_infer_so src: /usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8, src_lnk: libcudnn_ops_train.so.8.2.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8, dst_lnk: libcudnn_ops_train.so.8.2.1 src: /usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so, src_lnk: /etc/alternatives/libcudnn_ops_train_so, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so, dst_lnk: /etc/alternatives/libcudnn_ops_train_so src: /usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8, src_lnk: libcudnn_adv_infer.so.8.2.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8, dst_lnk: libcudnn_adv_infer.so.8.2.1 src: /usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so, src_lnk: /etc/alternatives/libcudnn_adv_infer_so, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so, dst_lnk: /etc/alternatives/libcudnn_adv_infer_so src: /usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8, src_lnk: libcudnn_cnn_infer.so.8.2.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8, dst_lnk: libcudnn_cnn_infer.so.8.2.1 src: /usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so, src_lnk: /etc/alternatives/libcudnn_cnn_infer_so, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so, dst_lnk: /etc/alternatives/libcudnn_cnn_infer_so src: /usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8, src_lnk: libcudnn_adv_train.so.8.2.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8, dst_lnk: libcudnn_adv_train.so.8.2.1 src: /usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so, src_lnk: /etc/alternatives/libcudnn_adv_train_so, dst: 
/var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so, dst_lnk: /etc/alternatives/libcudnn_adv_train_so src: /usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8, src_lnk: libcudnn_cnn_train.so.8.2.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8, dst_lnk: libcudnn_cnn_train.so.8.2.1 src: /usr/include/cudnn_adv_infer.h, src_lnk: /etc/alternatives/cudnn_adv_infer_h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/include/cudnn_adv_infer.h, dst_lnk: /etc/alternatives/cudnn_adv_infer_h src: /usr/include/cudnn_adv_train.h, src_lnk: /etc/alternatives/cudnn_adv_train_h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/include/cudnn_adv_train.h, dst_lnk: /etc/alternatives/cudnn_adv_train_h src: /usr/include/cudnn_backend.h, src_lnk: /etc/alternatives/cudnn_backend_h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/include/cudnn_backend.h, dst_lnk: /etc/alternatives/cudnn_backend_h src: /usr/include/cudnn_cnn_infer.h, src_lnk: /etc/alternatives/cudnn_cnn_infer_h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/include/cudnn_cnn_infer.h, dst_lnk: /etc/alternatives/cudnn_cnn_infer_h src: /usr/include/cudnn_cnn_train.h, src_lnk: /etc/alternatives/cudnn_cnn_train_h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/include/cudnn_cnn_train.h, dst_lnk: /etc/alternatives/cudnn_cnn_train_h src: /usr/include/cudnn.h, src_lnk: /etc/alternatives/libcudnn, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/include/cudnn.h, dst_lnk: /etc/alternatives/libcudnn src: /usr/include/cudnn_ops_infer.h, src_lnk: /etc/alternatives/cudnn_ops_infer_h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/include/cudnn_ops_infer.h, dst_lnk: /etc/alternatives/cudnn_ops_infer_h src: /usr/include/cudnn_ops_train.h, src_lnk: /etc/alternatives/cudnn_ops_train_h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/include/cudnn_ops_train.h, dst_lnk: /etc/alternatives/cudnn_ops_train_h src: /usr/include/cudnn_version.h, src_lnk: /etc/alternatives/cudnn_version_h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/include/cudnn_version.h, dst_lnk: /etc/alternatives/cudnn_version_h src: /etc/alternatives/libcudnn, src_lnk: /usr/include/aarch64-linux-gnu/cudnn_v8.h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/libcudnn, dst_lnk: /usr/include/aarch64-linux-gnu/cudnn_v8.h src: /etc/alternatives/libcudnn_adv_infer_so, src_lnk: /usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/libcudnn_adv_infer_so, dst_lnk: /usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8 src: /etc/alternatives/libcudnn_adv_train_so, src_lnk: /usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8, dst: 
/var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/libcudnn_adv_train_so, dst_lnk: /usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8 src: /etc/alternatives/libcudnn_cnn_infer_so, src_lnk: /usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/libcudnn_cnn_infer_so, dst_lnk: /usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8 src: /etc/alternatives/libcudnn_cnn_train_so, src_lnk: /usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/libcudnn_cnn_train_so, dst_lnk: /usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8 src: /etc/alternatives/libcudnn_ops_infer_so, src_lnk: /usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/libcudnn_ops_infer_so, dst_lnk: /usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8 src: /etc/alternatives/libcudnn_ops_train_so, src_lnk: /usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/libcudnn_ops_train_so, dst_lnk: /usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8 src: /etc/alternatives/libcudnn_so, src_lnk: /usr/lib/aarch64-linux-gnu/libcudnn.so.8, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/libcudnn_so, dst_lnk: /usr/lib/aarch64-linux-gnu/libcudnn.so.8 src: /etc/alternatives/cudnn_adv_infer_h, src_lnk: /usr/include/aarch64-linux-gnu/cudnn_adv_infer_v8.h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/cudnn_adv_infer_h, dst_lnk: /usr/include/aarch64-linux-gnu/cudnn_adv_infer_v8.h src: /etc/alternatives/cudnn_backend_h, src_lnk: /usr/include/aarch64-linux-gnu/cudnn_backend_v8.h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/cudnn_backend_h, dst_lnk: /usr/include/aarch64-linux-gnu/cudnn_backend_v8.h src: /etc/alternatives/cudnn_cnn_train_h, src_lnk: /usr/include/aarch64-linux-gnu/cudnn_cnn_train_v8.h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/cudnn_cnn_train_h, dst_lnk: /usr/include/aarch64-linux-gnu/cudnn_cnn_train_v8.h src: /etc/alternatives/cudnn_ops_train_h, src_lnk: /usr/include/aarch64-linux-gnu/cudnn_ops_train_v8.h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/cudnn_ops_train_h, dst_lnk: /usr/include/aarch64-linux-gnu/cudnn_ops_train_v8.h src: /etc/alternatives/cudnn_adv_train_h, src_lnk: /usr/include/aarch64-linux-gnu/cudnn_adv_train_v8.h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/cudnn_adv_train_h, dst_lnk: /usr/include/aarch64-linux-gnu/cudnn_adv_train_v8.h src: /etc/alternatives/cudnn_cnn_infer_h, src_lnk: /usr/include/aarch64-linux-gnu/cudnn_cnn_infer_v8.h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/cudnn_cnn_infer_h, dst_lnk: /usr/include/aarch64-linux-gnu/cudnn_cnn_infer_v8.h src: /etc/alternatives/cudnn_ops_infer_h, 
src_lnk: /usr/include/aarch64-linux-gnu/cudnn_ops_infer_v8.h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/cudnn_ops_infer_h, dst_lnk: /usr/include/aarch64-linux-gnu/cudnn_ops_infer_v8.h src: /etc/alternatives/cudnn_version_h, src_lnk: /usr/include/aarch64-linux-gnu/cudnn_version_v8.h, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/alternatives/cudnn_version_h, dst_lnk: /usr/include/aarch64-linux-gnu/cudnn_version_v8.h src: /usr/lib/aarch64-linux-gnu/libcudnn_static.a, src_lnk: /etc/alternatives/libcudnn_stlib, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcudnn_static.a, dst_lnk: /etc/alternatives/libcudnn_stlib src: /usr/lib/libvisionworks_sfm.so, src_lnk: libvisionworks_sfm.so.0.90, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/libvisionworks_sfm.so, dst_lnk: libvisionworks_sfm.so.0.90 src: /usr/lib/libvisionworks_sfm.so.0.90, src_lnk: libvisionworks_sfm.so.0.90.4, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/libvisionworks_sfm.so.0.90, dst_lnk: libvisionworks_sfm.so.0.90.4 src: /usr/lib/libvisionworks.so, src_lnk: libvisionworks.so.1.6, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/libvisionworks.so, dst_lnk: libvisionworks.so.1.6 src: /usr/lib/libvisionworks_tracking.so, src_lnk: libvisionworks_tracking.so.0.88, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/libvisionworks_tracking.so, dst_lnk: libvisionworks_tracking.so.0.88 src: /usr/lib/libvisionworks_tracking.so.0.88, src_lnk: libvisionworks_tracking.so.0.88.2, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/libvisionworks_tracking.so.0.88, dst_lnk: libvisionworks_tracking.so.0.88.2 src: /usr/lib/aarch64-linux-gnu/libnvinfer.so.8, src_lnk: libnvinfer.so.8.0.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libnvinfer.so.8, dst_lnk: libnvinfer.so.8.0.1 src: /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8, src_lnk: libnvinfer_plugin.so.8.0.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8, dst_lnk: libnvinfer_plugin.so.8.0.1 src: /usr/lib/aarch64-linux-gnu/libnvparsers.so.8, src_lnk: libnvparsers.so.8.0.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libnvparsers.so.8, dst_lnk: libnvparsers.so.8.0.1 src: /usr/lib/aarch64-linux-gnu/libnvonnxparser.so.8, src_lnk: libnvonnxparser.so.8.0.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libnvonnxparser.so.8, dst_lnk: libnvonnxparser.so.8.0.1 src: /usr/lib/aarch64-linux-gnu/libnvinfer.so, src_lnk: libnvinfer.so.8.0.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libnvinfer.so, dst_lnk: libnvinfer.so.8.0.1 src: /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so, src_lnk: libnvinfer_plugin.so.8.0.1, dst: 
/var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so, dst_lnk: libnvinfer_plugin.so.8.0.1 src: /usr/lib/aarch64-linux-gnu/libnvparsers.so, src_lnk: libnvparsers.so.8.0.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libnvparsers.so, dst_lnk: libnvparsers.so.8.0.1 src: /usr/lib/aarch64-linux-gnu/libnvonnxparser.so, src_lnk: libnvonnxparser.so.8, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libnvonnxparser.so, dst_lnk: libnvonnxparser.so.8 src: /etc/vulkan/icd.d/nvidia_icd.json, src_lnk: /usr/lib/aarch64-linux-gnu/tegra/nvidia_icd.json, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/etc/vulkan/icd.d/nvidia_icd.json, dst_lnk: /usr/lib/aarch64-linux-gnu/tegra/nvidia_icd.json src: /usr/lib/aarch64-linux-gnu/libcuda.so, src_lnk: tegra/libcuda.so, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libcuda.so, dst_lnk: tegra/libcuda.so src: /usr/lib/aarch64-linux-gnu/libdrm_nvdc.so, src_lnk: tegra/libdrm.so.2, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libdrm_nvdc.so, dst_lnk: tegra/libdrm.so.2 src: /usr/lib/aarch64-linux-gnu/libv4l2.so.0.0.999999, src_lnk: tegra/libnvv4l2.so, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libv4l2.so.0.0.999999, dst_lnk: tegra/libnvv4l2.so src: /usr/lib/aarch64-linux-gnu/libv4lconvert.so.0.0.999999, src_lnk: tegra/libnvv4lconvert.so, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libv4lconvert.so.0.0.999999, dst_lnk: tegra/libnvv4lconvert.so src: /usr/lib/aarch64-linux-gnu/libv4l/plugins/nv/libv4l2_nvargus.so, src_lnk: ../../../tegra/libv4l2_nvargus.so, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libv4l/plugins/nv/libv4l2_nvargus.so, dst_lnk: ../../../tegra/libv4l2_nvargus.so src: /usr/lib/aarch64-linux-gnu/libv4l/plugins/nv/libv4l2_nvcuvidvideocodec.so, src_lnk: ../../../tegra/libv4l2_nvcuvidvideocodec.so, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libv4l/plugins/nv/libv4l2_nvcuvidvideocodec.so, dst_lnk: ../../../tegra/libv4l2_nvcuvidvideocodec.so src: /usr/lib/aarch64-linux-gnu/libv4l/plugins/nv/libv4l2_nvvidconv.so, src_lnk: ../../../tegra/libv4l2_nvvidconv.so, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libv4l/plugins/nv/libv4l2_nvvidconv.so, dst_lnk: ../../../tegra/libv4l2_nvvidconv.so src: /usr/lib/aarch64-linux-gnu/libv4l/plugins/nv/libv4l2_nvvideocodec.so, src_lnk: ../../../tegra/libv4l2_nvvideocodec.so, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libv4l/plugins/nv/libv4l2_nvvideocodec.so, dst_lnk: ../../../tegra/libv4l2_nvvideocodec.so src: /usr/lib/aarch64-linux-gnu/libvulkan.so.1.2.141, src_lnk: tegra/libvulkan.so.1.2.141, dst: 
/var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/libvulkan.so.1.2.141, dst_lnk: tegra/libvulkan.so.1.2.141 src: /usr/lib/aarch64-linux-gnu/tegra/libcuda.so, src_lnk: libcuda.so.1.1, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/tegra/libcuda.so, dst_lnk: libcuda.so.1.1 src: /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurface.so, src_lnk: libnvbufsurface.so.1.0.0, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/tegra/libnvbufsurface.so, dst_lnk: libnvbufsurface.so.1.0.0 src: /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurftransform.so, src_lnk: libnvbufsurftransform.so.1.0.0, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/tegra/libnvbufsurftransform.so, dst_lnk: libnvbufsurftransform.so.1.0.0 src: /usr/lib/aarch64-linux-gnu/tegra/libnvbuf_utils.so, src_lnk: libnvbuf_utils.so.1.0.0, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/tegra/libnvbuf_utils.so, dst_lnk: libnvbuf_utils.so.1.0.0 src: /usr/lib/aarch64-linux-gnu/tegra/libnvdsbufferpool.so, src_lnk: libnvdsbufferpool.so.1.0.0, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/tegra/libnvdsbufferpool.so, dst_lnk: libnvdsbufferpool.so.1.0.0 src: /usr/lib/aarch64-linux-gnu/tegra/libnvid_mapper.so, src_lnk: libnvid_mapper.so.1.0.0, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/lib/aarch64-linux-gnu/tegra/libnvid_mapper.so, dst_lnk: libnvid_mapper.so.1.0.0 src: /usr/share/glvnd/egl_vendor.d/10_nvidia.json, src_lnk: ../../../lib/aarch64-linux-gnu/tegra-egl/nvidia.json, dst: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/usr/share/glvnd/egl_vendor.d/10_nvidia.json, dst_lnk: ../../../lib/aarch64-linux-gnu/tegra-egl/nvidia.json , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig.real --device=all --compute --compat32 --graphics --utility --video --display --pid=10082 /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged] nvidia-container-cli: mount error: file creation failed: /var/lib/docker/overlay2/a5782b5dd77014733b7e5145c0f7ed3c0d25694ea6790aee64255610c17d94e0/merged/dev/nvhost-nvdla0: cannot allocate memory: unknown. ~/Documents/kagan/code/Docker/docker_ws/isaac_ros_common/scripts

***All my nvidia-container libraries seem up to date (apt list --installed | grep nvidia), corresponding to Jetpack 4.6:

libnvidia-container-tools/stable,now 0.10.0+jetpack arm64 [installed] libnvidia-container0/stable,now 0.10.0+jetpack arm64 [installed] nvidia-container-csv-cuda/stable,now 10.2.460-1 arm64 [installed] nvidia-container-csv-cudnn/stable,now 8.2.1.32-1+cuda10.2 arm64 [installed] nvidia-container-csv-tensorrt/stable,now 8.0.1.6-1+cuda10.2 arm64 [installed] nvidia-container-csv-visionworks/stable,now 1.6.0.501 arm64 [installed] nvidia-container-runtime/stable,now 3.1.0-1 arm64 [installed] nvidia-container-toolkit/stable,now 1.0.1-1 arm64 [installed] nvidia-docker2/stable,now 2.2.0-1 all [installed] nvidia-l4t-3d-core/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-apt-source/stable,now 32.6.1-20210726122859 arm64 [installed] nvidia-l4t-bootloader/stable,now 32.6.1-20210726122859 arm64 [installed] nvidia-l4t-camera/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-configs/stable,now 32.6.1-20210726122859 arm64 [installed] nvidia-l4t-core/stable,now 32.6.1-20210726122859 arm64 [installed] nvidia-l4t-cuda/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-firmware/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-gputools/stable,now 32.6.1-20210726122859 arm64 [installed] nvidia-l4t-graphics-demos/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-gstreamer/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-init/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-initrd/stable,now 32.6.1-20210726122859 arm64 [installed] nvidia-l4t-jetson-io/stable,now 32.6.1-20210726122859 arm64 [installed] nvidia-l4t-jetson-multimedia-api/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-kernel/stable,now 4.9.253-tegra-32.6.1-20210726122859 arm64 [installed] nvidia-l4t-kernel-dtbs/stable,now 4.9.253-tegra-32.6.1-20210726122859 arm64 [installed] nvidia-l4t-kernel-headers/stable,now 4.9.253-tegra-32.6.1-20210726122859 arm64 [installed] nvidia-l4t-libvulkan/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-multimedia/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-multimedia-utils/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-oem-config/stable,now 32.6.1-20210726122859 arm64 [installed] nvidia-l4t-tools/stable,now 32.6.1-20210726122859 arm64 [installed] nvidia-l4t-wayland/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-weston/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-x11/stable,now 32.6.1-20210916210945 arm64 [installed] nvidia-l4t-xusb-firmware/stable,now 32.6.1-20210726122859 arm64 [installed]

In general, Docker (v19.03.15) runs correctly with 'sudo docker run hello-world'.

I am also able to run my first NVIDIA container from this tutorial without any problem: https://developer.nvidia.com/embedded/learn/tutorials/jetson-container

I am really stuck at this point and would really appreciate any solution that you can provide.

Best, Kagan

hemalshahNV commented 2 years ago

The run_dev.sh script should not be run with sudo. The script sets up an admin user in the container using your user ID and group, which does not work when running as root under sudo. We'll add a check in the script to prevent this and provide guidance. Your user needs to be in the docker group so that sudo is not required for docker client commands. Run 'sudo usermod -aG docker $USER' as described here: https://docs.docker.com/engine/install/linux-postinstall/ to update that.
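
For reference, the "gid '0' already exists" error appears under sudo because the script picks up your user and group IDs, and under sudo both resolve to root (0), which already exists in the container. Below is a minimal sketch of the post-install steps from the linked Docker docs (standard commands; the hello-world run is just a sanity check):

```bash
# Under sudo, the IDs the script picks up belong to root, so groupadd fails:
#   id -u  ->  0
#   id -g  ->  0   (group 0 already exists in the container)

# Run these once as your normal user, then log out/in (or use newgrp):
sudo groupadd docker            # may report the group already exists; that's fine
sudo usermod -aG docker $USER   # allow your user to talk to the Docker daemon
newgrp docker                   # pick up the new group in the current shell

# Sanity check without sudo:
docker run hello-world
```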

kaganerunsal commented 2 years ago

Hello, thank you very much for the feedback. I did what you suggested, basically:

  1. Removed the containers: docker rm -vf $(docker ps -a -q)
  2. Removed the images: docker rmi -f $(docker images -a -q)
  3. Added the docker group: sudo groupadd docker
  4. Added my user to the docker group: sudo usermod -aG docker $USER
  5. Ran the script without sudo to build it: ./run_dev.sh

I have attached the output of the build; at the end, you can see the same error I encountered.

output.txt

I still suspect it might be related to the nvidia-container libraries, but all my libraries are up to date. Do you have any other suggestions? Thank you in advance.

Note: It builds successfully but fails with an error when it runs the Docker container.

hemalshahNV commented 2 years ago

This error seems very different, you are right. Did you happen to run out of memory?


kaganerunsal commented 2 years ago

I think I have enough memory and disk space. Here is my jtop output while building the Docker image:

[jtop screenshot]

hemalshahNV commented 2 years ago

The log spew after "Running isaac_ros_dev-aarch64-container" suggests that the NVIDIA container runtime may have run out of swap memory, which is set to a ~4GB maximum. This Xavier NX is running JetPack 4.6, so nothing unusual there, and we've tested on Xavier NX without running into this yet. Is there anything else about your setup you think could be relevant, including anything you did after flashing JetPack 4.6?

Running isaac_ros_dev-aarch64-container
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: Running hook #0:: error running hook: exit status 1, stdout: src: /usr/lib/aarch64-linux-gnu/libcudnn.so.8, src_lnk: libcudnn.so.8.2.1, ... , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig.real --device=all --compute --compat32 --graphics --utility --video --display --pid=13679 /var/lib/docker/overlay2/3809d20985900c3386f4edc47c970eea0d66370dcd769db4871635145a20e776/merged]
nvidia-container-cli: mount error: file creation failed: /var/lib/docker/overlay2/3809d20985900c3386f4edc47c970eea0d66370dcd769db4871635145a20e776/merged/dev/nvhost-nvdla0: cannot allocate memory: unknown.
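
If memory pressure is the suspect, a quick way to check and temporarily add swap on the Jetson is sketched below (hedged; the /var/swapfile path and 4GB size are illustrative, not a requirement of run_dev.sh):

```bash
# Check current RAM and swap usage on the host:
free -h
swapon --show

# Temporarily add a 4GB swapfile (illustrative size and path):
sudo fallocate -l 4G /var/swapfile
sudo chmod 600 /var/swapfile
sudo mkswap /var/swapfile
sudo swapon /var/swapfile

# Make it persistent across reboots if it helps:
echo '/var/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```
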
kaganerunsal commented 2 years ago

I flashed JetPack 4.6 using SDK Manager. Shortly after flashing, I updated the repositories with sudo apt update and upgraded the packages with sudo apt-get upgrade. However, since only the default repos are configured in apt, the upgrade should not create a problem.

FPSychotic commented 2 years ago

I get this error. Relevant info: I'm on Ubuntu 20.04, Xavier NX, OpenCV 4.4 with CUDA.

Please, is it possible to install this without Docker?

error: Sending build context to Docker daemon 22.02kB Step 1/38 : ARG BASE_IMAGE="dustynv/ros:foxy-ros-base-l4t-r32.6.1" Step 2/38 : FROM ${BASE_IMAGE} foxy-ros-base-l4t-r32.6.1: Pulling from dustynv/ros b9bb7af7248f: Pulling fs layer 334a570c08f5: Pulling fs layer c8bd4a89e71c: Pulling fs layer 0a8ce4f08307: Pulling fs layer 2e163bfa50fc: Pulling fs layer 8d6d8c86a148: Pull complete b483bad8509f: Pull complete 8da3c6b29858: Pull complete a3e94453e320: Pull complete 40ad1f92affd: Pull complete 8ac6e2f9e2a0: Pull complete cc932e62bbab: Pull complete d77841654f58: Pull complete 2bb42f0b6424: Pull complete 954e18eab77a: Pull complete facc6f8bcc9b: Pull complete d4b32f371445: Pull complete 8abe66ee445c: Pull complete 2bc2b7a54d18: Pull complete c1a957de25c3: Pull complete 4811d7ce2890: Pull complete 7c40556846ce: Pull complete e00c9dd19f7d: Pull complete d4ab5f1a7659: Pull complete caa4bb4a1a92: Pull complete 92aa50f1eff2: Pull complete 44c4917d20dd: Pull complete eb14fae7b412: Pull complete Digest: sha256:3271b330732647c65f178980b5e6437d2b9ea03627ac0104e0d78ebb79cd55bc Status: Downloaded newer image for dustynv/ros:foxy-ros-base-l4t-r32.6.1 ---> bc3b88966255 Step 3/38 : ENV DEBIAN_FRONTEND=noninteractive ---> Running in c5208e12b3c1 Removing intermediate container c5208e12b3c1 ---> cdd86af29294 Step 4/38 : RUN apt-get update && apt-get install -y build-essential cmake curl git lsb-release sudo tar unzip vim wget software-properties-common && rm -rf /var/lib/apt/lists/* ---> Running in eacafa9d00af failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: Running hook #1:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: driver error: failed to process request: unknown ~/isaac_ros_common-main/scripts

hemalshahNV commented 2 years ago

What is it that you would like to use without Docker? Depending on the Isaac ROS package, the Docker container and the run_dev.sh script are optional. This error looks like the NVIDIA Container Toolkit is not set up properly (it is configured for you in JetPack 4.6). We have only qualified Isaac ROS (packages and Docker containers) on JetPack 4.6 (based on Ubuntu 18.04), so there will be issues on Ubuntu 20.04.
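
For reference, on JetPack the toolkit setup typically amounts to having the NVIDIA runtime registered in /etc/docker/daemon.json. The sketch below shows the usual JetPack configuration (hedged; verify against your own install and back up the existing file before overwriting it):

```bash
# Inspect the current Docker daemon config:
cat /etc/docker/daemon.json

# Typical JetPack configuration registering the NVIDIA runtime as the default
# (back up the existing file first):
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
EOF

# Restart the daemon, then re-run the script:
sudo systemctl restart docker
```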

FPSychotic commented 2 years ago

What is it that you would like to use without Docker? Depending on the Isaac ROS package, the Docker container and the run_dev.sh script are optional. This error looks like the NVIDIA Container Toolkit is not set up properly (it is configured for you in JetPack 4.6). We have only qualified Isaac ROS (packages and Docker containers) on JetPack 4.6 (based on Ubuntu 18.04), so there will be issues on Ubuntu 20.04.

I would like to be able to control what I use; Docker does not fit my application, and it is a limiting factor. The good thing about Linux is that you can compile and use apt, so why force us to use containers? In fact it is not working, and now I have to depend on NVIDIA, Ubuntu, the dependency maintainers, and now on Docker and the way NVIDIA wants to use it. Thanks, but I'm already suffering with L4T; please don't make me suffer with Docker too, mainly because it doesn't make sense. Please add a way and a guide to compile everything from source; it is what everyone does.

hemalshahNV commented 2 years ago

I would like to be able to control what I use; Docker does not fit my application, and it is a limiting factor. The good thing about Linux is that you can compile and use apt, so why force us to use containers? In fact it is not working, and now I have to depend on NVIDIA, Ubuntu, the dependency maintainers, and now on Docker and the way NVIDIA wants to use it. Thanks, but I'm already suffering with L4T; please don't make me suffer with Docker too, mainly because it doesn't make sense. Please add a way and a guide to compile everything from source; it is what everyone does.

I understand Docker does not fit your application. We use a Docker-based development environment, but we had to also adopt a Docker container for runtime to make it as plug-and-play as possible for our users. Docker containers let us assume more about the environment you're running on so we can qualify our packages more effectively. Triton is very sensitive to the versions of TensorRT, cuDNN, etc., and on x86, where there is no equivalent of JetPack, we could not reasonably expect our users to solve that 3-SAT problem easily. On aarch64, we have the right versions of everything, but we wanted to help users avoid compiling ROS2 from source because precompiled binaries are not available. With the NVIDIA Container Toolkit, Docker has minimal overhead and has many advantages for production robots (complete builds, easy deployment, fast upgrades).

Of course, if you want to run without the Docker containers, you can try. On Jetpack, it should be easy to apply to the host machine what we have in Dockerfile.aarch64.base, but x86 may be more difficult to get right. We'll look for ways to help re-enable users running "bare metal," but for now, this was the best solution so users could run without tricky setup.

FPSychotic commented 2 years ago

Thanks for your kind and reasonable explanation. I fixed my problem and was able to get what I need; thanks for your time and effort. You all did a hard job on this, and I recognise and appreciate it.

kaganerunsal commented 2 years ago

Hi hemalshahNV,

I managed to successfully build the Docker image again with a clean installation of the JetPack 4.6 image. With my Jetson Xavier NX, I am using the e-CAM50_CUNX camera from e-con Systems. I discovered that this camera's drivers are compatible with JetPack 4.5, and after I installed those binaries I could not run the Docker build; possibly they downgrade some of the nvidia-container libraries. I asked them to send me the JetPack 4.6 version of the camera drivers. Currently, I am using another camera, an Arducam IMX219, which is natively supported, so there is no need to install any drivers.