Closed RValner closed 3 years ago
This is intended behavior introduced by the ResponsiveWait
idle behavior.
We had recurring issues where users were keeping their robots parked in arbitrary locations, which created traffic conflicts that couldn't be resolved without moving the parked robot. As a temporary way to deal with this issue, we are having the robots perform a "responsive waiting" behavior whenever they are not performing a task. This responsive waiting should mean that they won't go anywhere unless they need to move out of the way of another robot.
a seemingly random, but always the same, location (just a "random" vertex)
The robot should stay on whatever waypoint it is starting on. Are you running a physical demo where the robot's initial location is not on any waypoint? If so, it will probably be told to go to some arbitrary nearby waypoint.
Yes, indeed the robot goes to the nearby waypoint when it is not located directly on one. Regarding the responsive waiting, I can see that behavior playing out nicely in the tinyRobot demos, where a path request is sent roughly every 15 seconds. But the problem in my case is that the full_control fleet adapter endlessly spams path requests to the same waypoint, as often as every second. Is that by design? As a result, when I use the demo dashboard to send a looping request, for example, the task gets queued but is never executed (I do see the task being registered by rmf_task_dispatcher, but nothing happens). Again, it's likely a simple configuration oversight on my part, but I'm out of ideas.
full_control fleet adapter endlessly spams path requests to the same waypoint as often as every second. Is that by design?
I think this may indicate a bug in the free fleet. The free fleet might be reporting that it has finished its waiting prematurely (before the waiting time has finished), which triggers the fleet adapter to request another round of waiting. You are probably the first person to test the new responsive wait behavior against the free fleet, so no one had seen this unintended behavior yet. Does that seem plausible @aaronchongth?
when I use the demo dashboard to send a looping request for example, the task gets queued and is never executed
When a task comes in, the responsive waiting should stop and the task should begin. I suspect this might be an unrelated issue. My best guess is that you are mixing "sim time" and wall time somewhere. The launch file you linked to sets sim_time to true, which would be a problem if you're running a real robot instead of a Gazebo simulation, because then sim_time will be stuck at 0 forever.
My best guess is you might be using a mixture of "sim time" and wall time somewhere.
You are right. I completely neglected clock synchronization between ROS 1 and ROS 2. I am running the turtlebot in simulation on ROS 1, and I had use_sim_time set to true across the launch files (both ROS 1 and ROS 2), but I forgot that the clock source has to be bridged to ROS 2 for everything to work in this simulation-based ROS 1/ROS 2 setup. This solved all the problems mentioned above.
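For reference, the clock bridging described above can be done with the standard ros1_bridge dynamic bridge (exact sourcing paths depend on your installation; the setup file locations below are typical defaults, not taken from this thread):

```shell
# Terminal 1: ROS 1 simulation is running and publishing /clock
# (e.g. Gazebo with the turtlebot).

# Terminal 2: source both distributions, then start the bridge.
source /opt/ros/noetic/setup.bash
source /opt/ros/foxy/setup.bash
# dynamic_bridge normally bridges only topics that have a publisher on
# one side and a subscriber on the other; --bridge-all-topics forwards
# everything, which guarantees /clock reaches the ROS 2 side.
ros2 run ros1_bridge dynamic_bridge --bridge-all-topics
```

With /clock bridged, the ROS 2 nodes that have use_sim_time set to true will track the simulation clock instead of staying stuck at 0.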
Thanks!
Hi,
I'm integrating rmf with free_fleet and initially I just want to get the basic pipeline working.
I'm on Ubuntu 20.04 and the configuration of the demo pipeline is (launch files in the links): turtlebot_sim(ROS Noetic) -> free_fleet_client(ROS Noetic) -> free_fleet_server(ROS Foxy) -> rmf_core(ROS Foxy)
I have created a traffic map, generated a navigation graph based on it, and defined the necessary transformations, so that rmf seems to be happy about the setup (nothing crashes, no errors or warnings are printed).
But the issue is that as soon as I launch the rmf (launch file here), the robot immediately receives a path request to a seemingly random, but always the same, location (just a "random" vertex). Once the robot reaches this location, the full_control fleet adapter keeps sending new path requests via FleetDriverRobotCommandHandle::follow_new_path to the same location roughly every second. Additionally, the path request is sent in FleetDriverRobotCommandHandle::update_state due to task_id from the state feedback message not being equal to current_path_request.task_id. If I print the ids out, I get: [full_control-11] in update_state: state_task_id=14, current_task_id=15, and in every state message state.task_id is always behind by one, but both keep incrementing due to new task requests. Any ideas on what is possibly causing this behaviour?
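The resend mechanism described above can be sketched as follows. This is a simplified, hypothetical Python model of the handshake (the real implementation is C++ inside the full_control fleet adapter; the class and field names here are illustrative only): the adapter keeps re-publishing its current PathRequest until the robot state echoes back the matching task_id.

```python
from dataclasses import dataclass

@dataclass
class PathRequest:
    task_id: int
    destination: str

class FleetAdapterSketch:
    """Simplified model of the request/acknowledge loop.

    The adapter re-publishes the current PathRequest on every state
    update until the reported task_id matches, which looks like
    "spamming" when the acknowledgement never catches up.
    """
    def __init__(self):
        self.current_path_request = None
        self.sent = []  # stand-in for publishing on the request topic

    def follow_new_path(self, destination):
        # Each new command increments the task_id.
        next_id = 1 if self.current_path_request is None \
            else self.current_path_request.task_id + 1
        self.current_path_request = PathRequest(next_id, destination)
        self.sent.append(self.current_path_request)

    def update_state(self, state_task_id):
        # If the robot's reported task_id does not match the current
        # request, re-publish it. A client that always reports a stale
        # (behind-by-one) task_id triggers a resend on every update.
        if self.current_path_request is not None and \
                state_task_id != self.current_path_request.task_id:
            self.sent.append(self.current_path_request)

adapter = FleetAdapterSketch()
adapter.follow_new_path("waypoint_7")  # task_id 1, published once
adapter.update_state(0)                # stale ack -> request resent
adapter.update_state(1)                # matching ack -> no resend
```

In this model a client whose state feedback lags one task_id behind never satisfies the equality check, so the adapter resends on every state message, matching the roughly once-per-second spam observed in the logs.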