What have I done wrong when running https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common
on a Jetson AGX Orin Developer Kit (L4T R35 (release), REVISION: 3.1),
such that nvcr.io/nvidia/isaac/ros:aarch64-ros2_humble_33836e394da2d095a59afd2d151038f8
was not pulled and Dockerfile.aarch64.ros2_humble doesn't seem to have done anything?
In /opt there is no ros or ros/humble.
In /opt/nvidia/ I don't see anything titled Isaac.
These are my Dockerfiles names:
Dockerfile.aarch64
Dockerfile.aarch64.ros2_humble
Dockerfile.aarch64.ros2_humble.user
This is .isaac_ros_common-config:
cat ~/workspaces/isaac_ros-dev/src/isaac_ros_common/scripts/.isaac_ros_common-config
CONFIG_IMAGE_KEY="aarch64.ros2_humble.user"
CONFIG_DOCKER_SEARCH_DIRS=(workspaces/isaac_ros-dev/src/isaac_ros_common/docker)
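For comparison, the doubled "aarch64.aarch64" in the build log below suggests run_dev.sh already prepends the platform prefix to the image key, and the bad search path suggests relative CONFIG_DOCKER_SEARCH_DIRS entries are resolved against the scripts directory. A config along these lines is what I believe was intended (this is an assumption reconstructed from the log output, not taken from the official docs):

```shell
# ~/workspaces/isaac_ros-dev/src/isaac_ros_common/scripts/.isaac_ros_common-config
# run_dev.sh appears to prepend the platform ("aarch64.") itself,
# so the key should not repeat it:
CONFIG_IMAGE_KEY="ros2_humble.user"
# Relative search dirs seem to be resolved against the scripts directory,
# so use an absolute path (or omit the line to fall back to ../docker):
CONFIG_DOCKER_SEARCH_DIRS=($HOME/workspaces/isaac_ros-dev/src/isaac_ros_common/docker)
```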
Here are the docker images that run_dev.sh created yesterday:
REPOSITORY TAG IMAGE ID CREATED SIZE
isaac_ros_dev-aarch64 latest d71e36089b3f 23 hours ago 14.2GB
aarch64-image latest 7c4e6144d53a 23 hours ago 14.2GB
Yesterday I tried creating the docker images/container via ./scripts/run_dev.sh.
Today, after looking at the scripts, I noticed that the first 3 lines of build_base_image.sh:
ROOT="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
source $ROOT/utils/print_color.sh
DOCKER_DIR="${ROOT}/../docker"
will fail when the script is run as ./scripts/run_dev.sh from the repository root, as directed by this repo's README.md, because the path "${ROOT}/../docker" doesn't exist in that case. The
source $ROOT/utils/print_color.sh line and the
DOCKER_DIR="${ROOT}/../docker" variable only resolve when pwd is
~/workspaces/isaac_ros-dev/src/isaac_ros_common/scripts
and the script is run via ./run_dev.sh.
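One way to sanity-check that assumption: the ROOT line quoted above resolves the script's own location from ${BASH_SOURCE[0]}, not from the caller's working directory, so it can be tested in isolation with a standalone sketch (demo.sh is a hypothetical file, not part of the repo):

```shell
#!/usr/bin/env bash
# demo.sh - ROOT resolves to the directory containing this script,
# regardless of the directory it is invoked from.
ROOT="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
echo "script dir: ${ROOT}"
echo "caller cwd: $(pwd)"
```

Running it from any other directory should still print the directory the script actually lives in on the first line.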
Here's what run_dev.sh does after the first time it is run. I tried to attach it as a file but could not.
~/workspaces/isaac_ros-dev/src/isaac_ros_common/scripts$ ./run_dev.sh
isaac_ros_dev not specified, assuming /home/scott/workspaces/isaac_ros-dev
Building aarch64.aarch64.ros2_humble.user base as image: isaac_ros_dev-aarch64 using key aarch64.aarch64.ros2_humble.user
(the doubled "aarch64" must be wrong, but how do I correct it?)
Using configured docker search paths: /home/scott/workspaces/isaac_ros-dev/src/isaac_ros_common/scripts/workspaces/isaac_ros-dev/src/isaac_ros_common/docker /home/scott/workspaces/isaac_ros-dev/src/isaac_ros_common/scripts/../docker
(the first path is obviously wrong, but how do I correct it?)
Using base image name not specified, using ''
Using docker context dir not specified, using Dockerfile directory
Resolved the following Dockerfiles for target image: aarch64.aarch64.ros2_humble.user (wrong)
/home/scott/workspaces/isaac_ros-dev/src/isaac_ros_common/scripts/../docker/Dockerfile.aarch64.ros2_humble.user
/home/scott/workspaces/isaac_ros-dev/src/isaac_ros_common/scripts/../docker/Dockerfile.aarch64
Building /home/scott/workspaces/isaac_ros-dev/src/isaac_ros_common/scripts/../docker/Dockerfile.aarch64 as image: aarch64-image with base:
[+] Building 1.4s (31/31) FINISHED
[cached build steps for Dockerfile.aarch64 omitted for brevity]
 => => writing image sha256:7c4e6144d53ab017404e6d89dd1902888810f74de2901ffea869fda76ba2c484
 => => naming to docker.io/library/aarch64-image
Building /home/scott/workspaces/isaac_ros-dev/src/isaac_ros_common/scripts/../docker/Dockerfile.aarch64.ros2_humble.user as image: isaac_ros_dev-aarch64 with base: aarch64-image
[+] Building 0.2s (15/15) FINISHED
[cached build steps for Dockerfile.aarch64.ros2_humble.user omitted for brevity]
 => => writing image sha256:d71e36089b3fc7e0e89bff74b718d088efe2aeacf571880dd5dbdb4c666f652d
 => => naming to docker.io/library/isaac_ros_dev-aarch64
Running isaac_ros_dev-aarch64-container
Stopping hotplug events dispatcher systemd-udevd [ OK ]
Starting hotplug events dispatcher systemd-udevd [ OK ]
/usr/local/bin/scripts/workspace-entrypoint.sh: line 13: /opt/ros//setup.bash: No such file or directory
[apt-get update output omitted for brevity]
/usr/local/bin/scripts/workspace-entrypoint.sh: line 16: rosdep: command not found
admin@chiorin:/workspaces/isaac_ros-dev$ ls /opt/ros
ls: cannot access '/opt/ros': No such file or directory
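The telltale double slash in /opt/ros//setup.bash is what bash produces when the distro variable in the entrypoint's "source /opt/ros/$ROS_DISTRO/setup.bash" line is empty, which is consistent with the ros2_humble Dockerfile never having been built into the image. A minimal illustration of that expansion (the exact variable name in the entrypoint is an assumption):

```shell
# When ROS_DISTRO is unset or empty, the path collapses to /opt/ros//setup.bash
unset ROS_DISTRO
echo "/opt/ros/${ROS_DISTRO}/setup.bash"
```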
Update: it failed the first time. I then deleted the 18 GB of images that I had renamed, ran run_dev.sh again with a proper .isaac_ros_common-config, and everything worked as designed. Sorry for wasting space and time.