Closed: PedromsLeal closed this issue 2 years ago
Sorry, it's not a slam_toolbox issue.
Still, for Jetson/RPi users: the problem was solved after checking top and verifying that the ICP node was using 100% of the CPU. Reducing the Intel RealSense D455's frame rate in the wrapper's configuration file brought its CPU usage down to around 75%, and both slam_toolbox and Nav2 now work as expected (although the same warnings are still spammed, at a lower rate). Both the map and the map->odom transform are being updated consistently (and according to the config file), and Nav2 is accepting goals.
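For anyone wanting something concrete, here is a minimal sketch of lowering the depth frame rate when starting the RealSense node from a launch file (instead of editing the wrapper's config file directly). The parameter names below are assumptions based on older realsense2_camera releases and vary between wrapper versions, so check the parameters your version actually declares:

```python
# Minimal sketch: start the RealSense wrapper at a reduced depth frame rate.
# NOTE: the parameter names below (depth_width/depth_height/depth_fps) are an
# assumption based on older realsense2_camera releases; newer releases expose
# the same setting differently (e.g. a 'depth_module.profile' string).
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    realsense_node = Node(
        package='realsense2_camera',
        executable='realsense2_camera_node',
        name='camera',
        parameters=[{
            'depth_width': 640,
            'depth_height': 480,
            'depth_fps': 15.0,  # lower FPS -> noticeably lower CPU load on the Jetson
        }],
    )
    return LaunchDescription([realsense_node])
```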
Operating System: Ubuntu 20.04 on a Jetson Xavier NX. The Jetson is set to its highest-performance mode (20W, 6 cores).
Installation type: ROS2 Galactic is installed from source. Ubuntu 20.04 is installed following: https://qengineering.eu/install-ubuntu-20.04-on-jetson-nano.html. Slam_toolbox is installed using apt-get install.
ROS Version: ROS2 Galactic
Version or commit hash: ROS2 was built as instructed by: https://docs.ros.org/en/galactic/Installation/Ubuntu-Development-Setup.html
Laser unit: an Intel RealSense D455 paired with the depthimage_to_laserscan package.
Steps to reproduce issue
I run a launch file consisting of: the D455 node, robot_localization, depthimage_to_laserscan (I plan to use a 2D lidar but I'm using the D455 for quick tests), ros2_laser_scanmatcher (from https://github.com/AlexKaravaev/ros2...), and finally a static transform between base_link and camera_link (a rough sketch of this launch file is below). Up to this point, everything works exceptionally well. The tf tree looks as one would expect: odom -> base_link at an average rate of 20 Hz as set in the robot_localization configuration, base_link -> camera_link is the static transform I set, and the rest are the camera's frames.
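Roughly, the sensor part of that launch file looks like the sketch below; the topic remappings, the output frame, and the static transform values are illustrative rather than my exact setup, and robot_localization and the scan matcher are omitted for brevity:

```python
# Rough sketch of the sensor part of the launch file (remappings, output_frame
# and the static transform values are illustrative, not the exact setup).
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        # Convert the D455 depth image into a LaserScan for slam_toolbox.
        Node(
            package='depthimage_to_laserscan',
            executable='depthimage_to_laserscan_node',
            remappings=[('depth', '/camera/depth/image_rect_raw'),
                        ('depth_camera_info', '/camera/depth/camera_info')],
            parameters=[{'output_frame': 'camera_depth_frame'}],
        ),
        # Static transform between base_link and camera_link (x y z yaw pitch roll).
        Node(
            package='tf2_ros',
            executable='static_transform_publisher',
            arguments=['0', '0', '0.2', '0', '0', '0',
                       'base_link', 'camera_link'],
        ),
    ])
```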
In RViz I can also see that everything works as expected: with the fixed frame set to odom, the scans don't change position as I move the camera, only the camera's frames do. Great. Now I want to add slam_toolbox as well as Nav2. I'm using slam_toolbox's default configuration (online_async), only changing the robot's base frame to base_link.
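In other words, slam_toolbox is started roughly as sketched below; in practice the stock mapper_params_online_async.yaml is passed in as well, and this sketch only spells out the frame and topic names I care about:

```python
# slam_toolbox with the stock online_async setup, only the base frame changed
# from the default ('base_footprint') to 'base_link'.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        Node(
            package='slam_toolbox',
            executable='async_slam_toolbox_node',
            name='slam_toolbox',
            parameters=[{
                'base_frame': 'base_link',
                'odom_frame': 'odom',
                'map_frame': 'map',
                'scan_topic': '/scan',
            }],
        ),
    ])
```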
Expected behavior
slam_toolbox functioning correctly: building the map consistently and publishing the map->odom transform.
Actual behavior
The terminal is spammed with messages like:
[1641398181.499569062] [slam_toolbox]: Message Filter dropping message: frame 'camera_depth_frame' at time 1641398181.448 for reason 'discarding message because the queue is full'
With the fixed frame in RViz set to odom, the scans and the camera frames are shown and updated at a decent frequency, and I can see the map being built right at the start; after that it only updates every now and then, when I move the D455 significantly. If I set the fixed frame to map, I can't see the scans or the frames at all, except every now and then when the map updates (I'm talking every 5 seconds or more, and only if I move the camera). The tf tree shows the map -> odom transform, however with an average rate of 2590000 and a buffer length of 0. Meanwhile, every other transform behaves as expected (as before starting slam_toolbox).
I've also tried the same setup on ROS2 Foxy with the exact same methods. There, the same message was spammed, only with the reason changed to 'Unknown'.
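To double-check how often a fresh map->odom transform actually arrives, rather than relying on the averaged rate shown in the tf tree, something like the small hypothetical helper below can be used; it is not part of my launch files, it just polls the tf buffer and logs whenever the map->odom transform gets a new stamp:

```python
# Small debugging sketch (hypothetical helper, not part of my setup): poll the
# tf buffer and log whenever a map->odom transform with a new stamp shows up.
import rclpy
from rclpy.node import Node
from tf2_ros.buffer import Buffer
from tf2_ros.transform_listener import TransformListener


class MapOdomRateCheck(Node):
    def __init__(self):
        super().__init__('map_odom_rate_check')
        self.tf_buffer = Buffer()
        self.tf_listener = TransformListener(self.tf_buffer, self)
        self.last_stamp = None
        self.create_timer(0.1, self.check)

    def check(self):
        try:
            # Latest available map->odom transform, whatever its stamp is.
            t = self.tf_buffer.lookup_transform('map', 'odom', rclpy.time.Time())
        except Exception:
            return  # transform not available yet
        stamp = (t.header.stamp.sec, t.header.stamp.nanosec)
        if stamp != self.last_stamp:
            self.get_logger().info(
                f'new map->odom transform stamped {stamp[0]}.{stamp[1]:09d}')
            self.last_stamp = stamp


def main():
    rclpy.init()
    rclpy.spin(MapOdomRateCheck())


if __name__ == '__main__':
    main()
```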
Additional information
At first I didn't mind it (this was before I realized it wasn't only the map updating slowly, but also the map->odom tf being messed up), and attempted to test Nav2 as well (again using the default navigation_launch file, only changing every use_sim_time to false; see the sketch below the error). The terminal spams:
[global_costmap.global_costmap]: Timed out waiting for transform from base_link to map to become available, tf error: Lookup would require extrapolation into the past. Requested time 1641398077.397696 but the earliest data is at time 1641398167.204133, when looking up transform from frame [base_link] to frame [map]
Unsurprisingly, if I send a goal, the controller/planner crashes.
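For reference, the Nav2 bringup is roughly equivalent to including the stock navigation_launch.py from nav2_bringup with use_sim_time set to false, as sketched here (in my case the defaults were edited directly in the files rather than passed as a launch argument):

```python
# Rough equivalent of the Nav2 bringup used: the stock navigation_launch.py
# from nav2_bringup, with use_sim_time forced to false (passed here as a
# launch argument; in my case the defaults were edited directly).
import os

from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource


def generate_launch_description():
    nav2_launch = os.path.join(
        get_package_share_directory('nav2_bringup'),
        'launch', 'navigation_launch.py')

    return LaunchDescription([
        IncludeLaunchDescription(
            PythonLaunchDescriptionSource(nav2_launch),
            launch_arguments={'use_sim_time': 'false'}.items(),
        ),
    ])
```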
Most solutions I've seen concern the tf tree not being built correctly, but I don't think that's the case here. I've also seen it mentioned that the Jetson might simply not be keeping up with the processing. I'm not very knowledgeable about the etiquette here, but I've posted this on ROS Answers (https://answers.ros.org/question/393773/slam-toolbox-message-filter-dropping-message-for-reason-discarding-message-because-the-queue-is-full/) since at first I didn't think it was a slam_toolbox problem. But, ultimately, I'm unsure.
I can add more info if needed.