cardboardcode closed this issue 3 months ago
I have been continuing to debug this issue myself for the past week, and have arrived at a short-term conclusion: a Dirty Workaround to avoid the issue, as well as a narrowing of the Cause of Issue.

The Cause of Issue has been narrowed down by incrementally shaving away at the ROS 2 packages instantiated in common.launch.xml under rmf_demos. The following is a minimal common.launch.xml file which gives the core error reported above:
```xml
<?xml version='1.0' ?>
<launch>
  <arg name="use_sim_time" default="false" description="Use the /clock topic for time to sync with simulation"/>

  <!-- Traffic Schedule -->
  <node pkg="rmf_traffic_ros2" exec="rmf_traffic_schedule" output="both" name="rmf_traffic_schedule_primary">
    <param name="use_sim_time" value="$(var use_sim_time)"/>
  </node>
</launch>
```
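To reproduce, this minimal file can be launched directly. A sketch, assuming the file above is saved as common.launch.xml in the working directory of the RMF Core container:

```bash
# Launch the minimal schedule-only launch file above; the crash occurs
# once fleet_adapter_template starts up against it.
ros2 launch ./common.launch.xml use_sim_time:=false
```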
Comparing this to a working example of a .launch.xml that launches RMF Core in a manner that does not crash upon the launch of fleet_adapter_template, there is no difference in the way rmf_traffic_schedule was called.
Therefore, the only difference between the Faulty Setup and the Working Setup seems to be :red_circle: the base docker image used :red_circle:.

The Working Setup uses ghcr.io/open-rmf/rmf/rmf_demos:latest from when it was still based on ROS 2 Humble. The Faulty Setup uses ghcr.io/open-rmf/rmf_deployment_template/rmf-simulation:latest, which is based on ROS 2 Humble.
The rmf_traffic_schedule shipped in ghcr.io/open-rmf/rmf_deployment_template/rmf-simulation:latest is, for lack of a better explanation, simply different and should not be used, at least as of this writing and to the author's personal understanding.
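One way to substantiate this would be to compare the rmf_traffic_ros2 versions shipped in the two images. A sketch, assuming each image has bash and ros2 available and a workspace at /opt/rmf/install (the path that appears in the stack trace below):

```bash
# Print the rmf_traffic_ros2 package version baked into each image.
# The sourced workspace path is an assumption based on the stack trace.
for img in ghcr.io/open-rmf/rmf/rmf_demos:latest \
           ghcr.io/open-rmf/rmf_deployment_template/rmf-simulation:latest; do
  echo "$img:"
  docker run --rm "$img" bash -c \
    "source /opt/rmf/install/setup.bash && ros2 pkg xml rmf_traffic_ros2 | grep -m1 '<version>'"
done
```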
To avoid the core error reported in this thread, I would recommend using ghcr.io/open-rmf/rmf/rmf_demos:latest as the base docker image to quickly instantiate an RMF Core for development purposes.
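A minimal sketch of this workaround, assuming the image can simply be pulled and started on the host network (the launch command to run inside the container is illustrative, e.g. the minimal common.launch.xml above):

```bash
# Pull the recommended base image and start an interactive container
# on the host network so DDS traffic can reach the fleet adapter.
docker pull ghcr.io/open-rmf/rmf/rmf_demos:latest
docker run -it --rm --network host \
  ghcr.io/open-rmf/rmf/rmf_demos:latest bash
```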
Closing, since resolving this issue may not be useful in the long run, given that developers are probably not going to use ghcr.io/open-rmf/rmf_deployment_template/rmf-simulation:latest for an official deployment of an RMF Core.
Description
Issue Description :spiral_notepad:
In an attempt to dockerise fleet_adapter_template as well as RMF Core so that these two RMF components can talk to each other from different environments, the following error is encountered:
Error Abstract :eye:
```
[rmf_traffic_schedule-1] terminate called after throwing an instance of 'std::runtime_error'
[rmf_traffic_schedule-1] what(): Invalid rmf_traffic_msgs/ScheduleQuerySpacetime type [0]
[rmf_traffic_schedule-1] Stack trace (most recent call last):
[rmf_traffic_schedule-1] #23 Object "", at 0xffffffffffffffff, in
[rmf_traffic_schedule-1] #22 Object "/opt/rmf/install/rmf_traffic_ros2/lib/rmf_traffic_ros2/rmf_traffic_schedule", at 0x402214, in _start
[rmf_traffic_schedule-1] #21 Object "/usr/lib/x86_64-linux-gnu/libc.so.6", at 0x7ef3e21b1e3f, in __libc_start_main
[rmf_traffic_schedule-1] #20 Object "/usr/lib/x86_64-linux-gnu/libc.so.6", at 0x7ef3e21b1d8f, in
[rmf_traffic_schedule-1] #19 Object "/opt/rmf/install/rmf_traffic_ros2/lib/rmf_traffic_ros2/rmf_traffic_schedule", at 0x402536, in main
[rmf_traffic_schedule-1] #18 Object "/opt/ros/humble/lib/librclcpp.so", at 0x7ef3e26edc8e, in rclcpp::spin(std::shared_ptr
```

Steps To Reproduce :books:
Follow the steps below to recreate the error encountered:
Environment :bookmark_tabs:
- OS: Ubuntu 22.04.4 LTS
- ROS 2 Distribution: Humble
- Docker: 26.1.3, build b72abbb
1. Place the test.building.yaml map file in the /map directory.
2. Place a Dockerfile in the newly created directory /fleet_adapter_template (see the build-and-run sketch after this list).
3. Place the 0.yaml navigation graph file in the following directory: /fleet_adapter_template/.
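The build-and-run flow these steps imply, as a rough sketch; the image names, tags, and build context paths are illustrative assumptions, and whether each container launches its nodes automatically depends on its entrypoint:

```bash
# Build the fleet adapter image from the Dockerfile created above
# (the tag and build context are hypothetical).
docker build -t fleet_adapter_template ./fleet_adapter_template

# Run RMF Core and the fleet adapter in separate containers on the
# host network so DDS traffic can flow between them.
docker run -d --rm --network host \
  ghcr.io/open-rmf/rmf_deployment_template/rmf-simulation:latest
docker run -it --rm --network host fleet_adapter_template
```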
Expected Behaviour :green_circle:
fleet_adapter_template is able to find the RMF Schedule Core, but gives an error about being unable to connect to the non-existent robot API server at http://127.0.0.1:8080, as specified in config.yaml.
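A quick sanity check of this expected failure mode, assuming the robot API server is simply not running on the host:

```bash
# config.yaml points the adapter at this address; since no robot API
# server is running there, the request should fail with
# "Connection refused", mirroring the adapter's error.
curl -v http://127.0.0.1:8080
```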
Actual Behaviour :red_circle:
Upon running the dockerised fleet_adapter_template using the steps above, the dockerised RMF Core crashes with the aforementioned error.

Remarks
Appreciate any constructive help/feedback on this issue. :blush: :pray:
The rationale for this form of setup is better system scalability: in an RMF deployment, a robot's fleet adapter would run in a different environment and on a different server from RMF Core, which should be running in the cloud alongside the Dashboard and API Server. This is in contrast to the many online tutorial examples of RMF deployment, which assume the fleet adapter always runs in the same environment as RMF Core.
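A sketch of that intended split, assuming the two hosts can reach each other's DDS traffic and share a ROS domain; the image names and domain ID are hypothetical:

```bash
# Cloud host: RMF Core, together with the Dashboard and API Server.
docker run -d --network host -e ROS_DOMAIN_ID=42 rmf_core_image

# Robot-side host: only the fleet adapter. The ROS_DOMAIN_ID must
# match the cloud host's for the two sides to discover each other.
docker run -d --network host -e ROS_DOMAIN_ID=42 fleet_adapter_image
```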