Code for a submersible.
If using an Nvidia Jetson or an Nvidia GPU, the ISAAC ROS Docker Environment method is recommended. Otherwise, refer to the documentation on Google Drive.
Ensure Docker for your OS is installed. For recent versions of Windows and Docker, Nvidia GPU support should be built-in. For Linux, follow the Nvidia Docker installation instructions.
In a terminal (current directory doesn't matter), run:

```shell
./activate.sh
```

This opens a shell inside the Docker container with a pre-defined environment and the entire repository mounted as `/workspaces/isaac_ros-dev`. Subsequent runs of `activate.sh` re-use the same container instead of creating a new one. To reset the environment, remove the container by running the following command on the host machine:

```shell
docker rm -f isaac_ros_dev-container
```
Note: Resetting the environment is necessary if any `Dockerfile` is modified, or if `CONFIG_IMAGE_KEY` inside `.isaac_ros_common-config` is changed, so that the container is rebuilt and the changes take effect.
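For reference, a minimal `.isaac_ros_common-config` might look like the following (the key name comes from isaac_ros_common's `run_dev.sh`; the layer name here is a placeholder, not one from this repo):

```shell
# .isaac_ros_common-config
# CONFIG_IMAGE_KEY is a period-separated list of image layers; each key
# is matched against a Dockerfile.<key> when building the container image.
CONFIG_IMAGE_KEY=ros2_humble.my_layer
```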
Note: For an alternative workflow using VSCode Dev Containers (which support Python IntelliSense), use "> Dev Containers: Attach to Running Container..."; see: https://code.visualstudio.com/docs/devcontainers/attach-container.
Refer to the documentation on Google Drive.
Add first-party ROS2 packages to `src/`, third-party ROS2 packages as a git submodule to `thirdparty/`, and Arduino code to `arduino/`. Add a `COLCON_IGNORE` file (i.e., `touch COLCON_IGNORE`) in folders containing ROS2 packages you don't want colcon to build. See the respective `README.md` files in each folder for more information.
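For example, to keep colcon from building one package (the package path below is a placeholder):

```shell
# Placeholder package path, created only for this sketch; in practice the
# package folder already exists. colcon skips any directory containing an
# (empty) COLCON_IGNORE file.
mkdir -p src/some_unwanted_pkg
touch src/some_unwanted_pkg/COLCON_IGNORE
```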
- `arduino/`: Arduino code.
- `docker/`: Additional image layers for the ISAAC ROS Docker Environment.
- `src/`: First-party ROS2 packages.
- `thirdparty/`: Third-party ROS2 packages added as git submodules.

Because the realsense-viewer doesn't work in my (J-H) container for some reason, and we now have a severe oversaturation issue with the D455 (see the last page of https://www.intelrealsense.com/download/13629/), we are using a regular USB camera. Here's the ROS2 usb_cam node:
```shell
sudo apt install ros-humble-usb-cam
ros2 run usb_cam usb_cam_node_exe --ros-args -p video_device:=/dev/video4

# Video recording node
ros2 run debug_cv record_vid --ros-args -p img_topic:=/image_raw

# CV testing mode (using your webcam)
ros2 launch debug_cv test.launch.py
```
The parameters are documented only in the source code itself; see: https://github.com/ros-drivers/usb_cam/blob/70ed391a979287bad056c9e75bad8c2001a98f2b/src/ros2/usb_cam_node.cpp#L65-L85
Surprisingly, most of the node's parameters can be changed live via `rqt`, so that's a good way to figure out what does what and calibrate.