Reusing a previous workspace would make management more convenient. However, some aspects might not be ideal. For example, if we want to simulate Husky and ZED in Gazebo, our URDF file must include both, but the files for these two will be placed in different install folders within their respective workspaces/containers, leading to import failures and other issues. You can take a look here for more details. Unless we restructure the entire workspace to use packages for separation, I don't think this would be a good approach.
Thanks for your comment! I understand that some aspects may not be ideal, but the current copy-and-paste approach would lead to maintenance difficulties in the long run. (There may be a lot of duplicate code, such as the five near-identical copies of `husky_ws` scattered throughout this repo.)
How about we only copy the necessary files and use workspace overlay/underlay? Something like:
zed_to_husky_ws
└── src
    ├── dummy_controller
    ├── husky_control     # this may not be required due to overlaying?
    ├── husky_description # this contains the zed+husky URDF (maybe rename to `robot_description`?)
    └── husky_gazebo      # (maybe rename to `robot_gazebo`?)
and we'll source the `husky_ws`, `zed_ws`, and `zed_to_husky_ws` environments, in this order?

If this is possible, we can minimize code duplication (by reusing underlay packages) and still allow code modifications (by overriding the underlay packages through the overlay).
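To make the overlay idea concrete, here is a minimal sketch (not code from this repo) of how a package that exists in both an underlay and the overlay would be resolved after sourcing the three workspaces in that order; the `husky_description` name is simply taken from the proposed layout above:

```python
# Minimal sketch, assuming husky_ws, zed_ws, and zed_to_husky_ws were sourced
# in that order. Sourcing each workspace prepends its install prefix to
# AMENT_PREFIX_PATH, and ament returns the first match it finds, so a package
# rebuilt in zed_to_husky_ws shadows the underlay package of the same name.
from ament_index_python.packages import get_package_share_directory

# Resolves to zed_to_husky_ws's install if it provides `husky_description`;
# otherwise it falls back to the copy installed in husky_ws.
print(get_package_share_directory('husky_description'))
```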
So the Gazebo simulation will run in the `zed_to_husky_ws` container (while reusing packages from other workspaces, and possibly requiring some extra simulation dependencies to be installed), without requiring other containers. As for real-world deployment, the `zed_to_husky_ws` container will be reduced to only running the `dummy_controller` package, and we'll run the extra `zed_ws` and `husky_ws` containers to interface with the real hardware.
We can configure this by using a docker compose file to extend the compose configs of `zed_ws` and `husky_ws`. This compose file can also distribute the containers across different hardware (for example, a laptop and Jetson boards).
Sidenote: I think restructuring the current workspaces into packages may not be feasible.
I can roughly understand your idea, and I like it! Conceptually, I think it's quite similar to how the `kobuki_driver_ws` was designed before. Since the method used there was feasible, I believe the current solution should also work.

However, when dealing with `zed_to_husky_ws`, there might be a need to fine-tune some things in `husky_ws` to make it more convenient. I'll test this out and see how to handle it better!
But first, I want to test using `gazebo_world_ws` to see if it's possible to reuse parts of `husky_ws` and make the Husky work in the simulated world inside `gazebo_world_ws`. Since only `husky_ws` has already been merged into the main branch, working on `zed_to_husky_ws` first might cause dependency issues with the changes in `zed_ws`. So handling `gazebo_world_ws` first should be relatively simple.
Yes, I agree with you! Please try it out in `gazebo_world_ws` when you have time. Thanks!

If this code structure seems to work, we can continue working on `vlp_to_kobuki_ws` and `vlp_to_husky_ws`. As for the pipelines that include ZED or RealSense, we may wait until their workspaces are fixed and merged.

It's worth noting that we should also allow `vlp_to_husky_ws` and `vlp_to_kobuki_ws` to reuse the worlds in `gazebo_world_ws`.
It just occurred to me that it may also be possible for `husky_ws` to reuse the `citysim` package in `gazebo_world_ws`, thereby reducing the duplicated copies of `citysim` in this repo. This can be addressed after the `gazebo_world_ws` PR though.
Just came across the multi-machine support feature in launch files, which may be useful for running pipelines across machines. Although it is supported in ROS 1, it isn't supported in ROS 2 yet.
Pipelines may require launching multiple containers across workspaces at once (potentially across machines), and it would be preferable if a single launch file could launch nodes across containers (and machines).
Some random thoughts:
- `docker compose up` (use pre-generated ssh-keys and hostnames to allow direct access).
- `ExecuteProcess` in launch files to ssh into other containers (on the host or another machine) to launch the nodes (a rough sketch is included below).
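For the `ExecuteProcess` idea, below is a minimal sketch of what such a launch file could look like; the hostname, user, remote workspace path, and `husky_bringup` launch target are assumptions used only for illustration:

```python
# Minimal sketch: a ROS 2 launch file that ssh-es into another container or
# machine and starts a launch file there. Host, paths, and launch target are
# assumptions; it also assumes pre-generated ssh keys, as noted above.
from launch import LaunchDescription
from launch.actions import ExecuteProcess


def generate_launch_description():
    # The remote shell is non-interactive, so source the remote workspace first.
    remote_cmd = (
        "source ~/husky_ws/install/setup.bash && "
        "ros2 launch husky_bringup husky.launch.py"
    )
    return LaunchDescription([
        ExecuteProcess(
            cmd=['ssh', 'ros@husky-container', 'bash', '-lc', f"'{remote_cmd}'"],
            output='screen',
        ),
    ])
```

This keeps a single launch file as the entry point while the actual nodes run wherever ssh can reach, which is roughly what the ROS 1 `machine` tag provided.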
I'm thinking about the best approach for implementing pipelines based on existing workspaces (such as #27 and #28).
The primary goal is to ensure ease of use, while maintaining minimal code duplication.
For example, the VLP-16 to Husky pipeline could be simplified by using symbolic links and gitignore for `vlp_ws` and `husky_ws`, which would 100% reuse the code of those two workspaces without duplication. I don't think this is the best way though...

I'm currently thinking of introducing only the required code and configs for `vlp_to_husky_ws`, where:

- `vlp_to_husky_ws/src/dummy_controller/dummy_controller/publisher.py` publishes the commands (see the sketch at the end of this comment).
- `vlp_to_husky_ws/docker/compose.yaml` extends the compose files from `vlp_ws` and `husky_ws`.

This way, we can treat `vlp_to_husky_ws` just like a normal workspace (one that depends on some other workspaces) with minimal code duplication. Moreover, we can easily integrate existing workspaces such as `gazebo_world_ws` or `isaac_sim_ws` in the future.

I'm curious about @YuZhong-Chen's and @Assume-Zhan's thoughts on this. I look forward to your comments when you have time.
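For reference, here is a minimal sketch of what the `dummy_controller` publisher in the first bullet could look like; the `/cmd_vel` topic, the publish rate, and the velocity value are assumptions rather than details taken from the existing workspaces:

```python
# Minimal sketch of a dummy command publisher; topic, rate, and values are
# assumptions for illustration only.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist


class DummyController(Node):
    def __init__(self):
        super().__init__('dummy_controller')
        self.pub = self.create_publisher(Twist, '/cmd_vel', 10)
        self.timer = self.create_timer(0.1, self.publish_command)  # 10 Hz

    def publish_command(self):
        msg = Twist()
        msg.linear.x = 0.2  # drive forward slowly
        self.pub.publish(msg)


def main():
    rclpy.init()
    node = DummyController()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

It would then be exposed as a console-script entry point in the package's `setup.py`, so the same node can be launched both in simulation and alongside the real-hardware containers.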