Original comment by Martin Dlouhy (Bitbucket: robotikacz).
p.s. I just realized that it could be related to the new "unit test" world (I am using worldName:=simple_tunnel_01), so the bottleneck is no longer simulation speed (in my case 54 simulated seconds per minute) but TCP communication?! Is there a way to manually reduce the simulation speed (as an alternative to reducing the number of messages)?
Original comment by Martin Dlouhy (Bitbucket: robotikacz).
p.s.2 for tunnel_circuit_practice_01 I get this 1-minute report:
0:01:00.024265 (13.3 -3.8 0.0) [('acc', 330), ('pose2d', 500), ('rot', 330), ('scan', 341), ('sim_time_sec', 34)]
… so the IMU count (acc, rot) is almost the same as the number of scans (everything is downsampled to 10Hz), with 34s simulated,
and for simple_tunnel_01 I had:
0:01:00.019073 (22.0 -3.5 0.0) [('acc', 467), ('pose2d', 536), ('rot', 467), ('scan', 538), ('sim_time_sec', 54)]
… in both cases there is still an issue with irregular odometry (pose2d), but that is discussed in a separate issue.
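For context, the report lines above look like per-minute message counts per channel, printed together with the robot pose and the elapsed simulated seconds. A minimal sketch of how such a report could be produced is below; the function names are made up for illustration and this is not necessarily how the actual tool works.

    # Minimal sketch of a per-minute message-count report (hypothetical names):
    # count every received message per channel and dump the counters together
    # with the current pose once per wall-clock minute.
    import time
    from collections import Counter

    counters = Counter()
    start = time.time()

    def on_message(channel):
        # called for every received message: 'acc', 'rot', 'scan', 'pose2d',
        # 'sim_time_sec' (assumed one message per simulated second), ...
        counters[channel] += 1

    def maybe_report(pose_xyz):
        global start
        if time.time() - start >= 60.0:
            print(pose_xyz, sorted(counters.items()))
            counters.clear()
            start = time.time()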
Original comment by Addisu Z. Taddese (Bitbucket: azeey, GitHub: azeey).
Martin Dlouhy (robotikacz) The odometry message was using real time instead of sim time and that caused the issue you reported in #170. Now that that is fixed, is this still an issue?
Original comment by Martin Dlouhy (Bitbucket: robotikacz).
The IMU is a separate issue and it still persists - I will double-check, but as far as I remember the number of IMU messages (after downsampling to 10Hz on the receiver side) was lower over 1 minute than the number of odometry messages, so the lag is growing over time.
Original comment by Martin Dlouhy (Bitbucket: robotikacz).
Confirmed:
0:01:00.092156 (12.6 -3.9 0.0) [('acc', 329), ('pose2d', 333), ('rot', 329), ('scan', 333), ('sim_time_sec', 34)]
0:02:00.014756 (13.9 10.6 0.0) [('acc', 333), ('pose2d', 336), ('rot', 333), ('scan', 337), ('sim_time_sec', 34)]
so in 34 simulated seconds (which take 1 minute on our computer) there are fewer messages from the IMU (acc, rot) than for odometry (pose2d) and lidar scan … over time this lag grows to an unusable state. I am dropping 24 out of every 25 received messages, so at the moment I am not sure what else I could do.
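One alternative to dropping a fixed 24 of every 25 messages, assuming each message carries a simulation timestamp, is to downsample by sim time: the kept rate then stays at 10Hz in sim time even when the publisher's wall-clock rate changes with the real-time factor. A minimal sketch with made-up names:

    # Hedged sketch: downsample by simulation timestamp instead of dropping a
    # fixed fraction of messages, so the kept rate is 10 Hz in sim time
    # regardless of how fast the messages arrive in wall-clock time.
    PERIOD = 0.1          # desired period in simulated seconds (10 Hz)
    last_kept = None

    def keep(sim_stamp):
        """Return True for messages that should be processed."""
        global last_kept
        if last_kept is None or sim_stamp - last_kept >= PERIOD:
            last_kept = sim_stamp
            return True
        return False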
Original comment by Addisu Z. Taddese (Bitbucket: azeey, GitHub: azeey).
I have not been able to reproduce the problem. I tried setting my CPU frequency to get a similar RTF (~60%) to yours and recorded the data. Here's the rosbag info after 1 minute of wall time and ~32 seconds of sim time:
path:        data_rate_slow.bag
version:     2.0
duration:    31.8s
start:       Dec 31 1969 18:00:57.08 (57.08)
end:         Dec 31 1969 18:01:28.84 (88.84)
size:        7.7 MB
messages:    10164
compression: none [11/11 chunks]
types:       nav_msgs/Odometry     [cd5e73d190d741a2f92e81eda573aca7]
             sensor_msgs/Imu       [6a62c6daae103f4ff57a132d6f95cec2]
             sensor_msgs/LaserScan [90c7ef2dc6895d81024acba2ac42f369]
topics:      /X2/front_scan    635 msgs : sensor_msgs/LaserScan
             /X2/imu/data     7941 msgs : sensor_msgs/Imu
             /X2/odom         1588 msgs : nav_msgs/Odometry
From that, I can see that the frequencies for LaserScan, Imu, and Odometry are 20 Hz, 250 Hz, and 50 Hz respectively. These are the expected frequencies.
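As a quick sanity check, those rates follow directly from the message counts and the 31.8s bag duration:

    # Rates implied by the rosbag info above: message count / bag duration.
    duration = 31.8  # seconds
    for topic, count in [('/X2/front_scan', 635),
                         ('/X2/imu/data', 7941),
                         ('/X2/odom', 1588)]:
        print(topic, round(count / duration, 1), 'Hz')
    # -> roughly 20, 250 and 50 Hz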
Can I ask how you are downsampling?
Original comment by Martin Dlouhy (Bitbucket: robotikacz).
I use a “ROS proxy” with a TCP connection to talk to the ROS master, i.e. non-ROS code. There could also be issues on our simulation server … but why do you actually use a 250Hz update rate for the IMU now? I checked an older log file subt-x2-left-190303_150622.log from the qualification/Gazebo times, and there was:
22  rosmsg_imu.rot              17621 | 2250 | 4.2Hz
23  rosmsg_imu.acc              19219 | 2250 | 4.2Hz
24  rosmsg_laser.scan         1920688 |  900 | 1.7Hz
25  rosmsg_image.image       67108672 |  757 | 1.4Hz
26  rosmsg_odom.pose2d          19616 | 2249 | 4.2Hz
27  rosmsg_odom.sim_time_sec       46 |   46 | 0.1Hz
so the odometry frequency was the same as for the IMU, and it used to be 2250/46 ~ 48.91, so probably 50Hz. Lidar was 900/46 ~ 19.56, i.e. ~20Hz, and camera 757/46 ~ 16.45 … this is a bit strange, but my point is the 1:1 ratio of odometry to IMU, i.e. a much slower IMU than now.
Original comment by Martin Dlouhy (Bitbucket: robotikacz).
p.s. note that before, no “down sampling” was necessary - all messages were used & processed.
Original comment by Martin Dlouhy (Bitbucket: robotikacz).
I changed the priority to blocker, because without real-time IMU I cannot detect collisions in time, and it also makes no sense to notice that the robot is upside down a second later. I would repeat my last question: why did you increase the IMU frequency 5 times compared to qualification?! I will also change it to “bug”, as in my case throwing away 96% of the received IMU messages does not help. The situation on the production machine could be different, but without #176 I will not be able to analyze it anyway.
Original comment by Alfredo Bencomo (Bitbucket: bencomo).
Besides the reporter of the issue, does anyone else competing in the tunnel circuit challenge want to lower the frequency of the IMU to 50Hz? :thumbsup: :thumbsdown:
Original comment by Martin Dlouhy (Bitbucket: robotikacz).
Maybe an alternative would be to also publish “filtered data” at 50Hz?
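A receiver-side workaround in the same spirit is sketched below: a small rospy node that relays /X2/imu/data and republishes at most 50Hz. The output topic name is made up for illustration, and this only throttles the stream, it does not filter or average the intermediate samples; running "rosrun topic_tools throttle messages /X2/imu/data 50" would do roughly the same thing.

    #!/usr/bin/env python
    # Hedged sketch: relay /X2/imu/data and republish at most 50 Hz.
    # Only throttles the stream; does not filter/average samples.
    import rospy
    from sensor_msgs.msg import Imu

    rospy.init_node('imu_throttle')
    pub = rospy.Publisher('/X2/imu/data_50hz', Imu, queue_size=10)  # made-up name
    last = rospy.Time(0)

    def callback(msg):
        global last
        if (msg.header.stamp - last).to_sec() >= 1.0 / 50.0:
            last = msg.header.stamp
            pub.publish(msg)

    rospy.Subscriber('/X2/imu/data', Imu, callback)
    rospy.spin()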
Original comment by Sophisticated Engineering (Bitbucket: sopheng).
So far, I'm not sure whether 50Hz of IMU data makes sense or not.
But I'm afraid that there is a performance issue that we currently cannot describe in detail but that might show up in different areas in the future. An example that also worries me is PR #256: there, the large number of error messages from robot_localization was reduced by lowering the rate from 1000Hz to 250Hz, but there are still error messages in Ignition, while the same node in Gazebo produced no such error messages. Could the root cause there be the same problem that shows up here?
Original comment by Alfredo Bencomo (Bitbucket: bencomo).
Martin, are you running the SubT Simulator using the OSRF Docker image or a catkin workspace?
Original comment by Alfredo Bencomo (Bitbucket: bencomo).
Then, can you please test the latest image(s) from today to see if that fixes the problem you are having?
Original comment by Martin Dlouhy (Bitbucket: robotikacz).
Great! I am going to try it now, thank you very much :slight_smile:
Original comment by Martin Dlouhy (Bitbucket: robotikacz).
Did you reduce the frequency or introduce some new topic?
Original comment by Martin Dlouhy (Bitbucket: robotikacz).
So far I can confirm that the number of messages (tested on simple tunnel 01) is the same:
0:01:00.064495 (-3.5 -14.2 -0.0) [('acc', 541), ('pose2d', 540), ('rot', 541), ('scan', 541), ('sim_time_sec', 55)]
i.e. so far so good … I will check the log file soon (I am in Germany at “Robotour 2019” so connectivity is a bit of a problem now).
Thanks a lot! :slight_smile:
Original comment by Martin Dlouhy (Bitbucket: robotikacz).
p.s. I checked a small log and I will have to test/improve lidar vs. IMU synchronization (probably removing the downsampling will help), but it is globally consistent and seems to be without lag, so perfect! I will create a new issue if something shows up, thanks.
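For the lidar vs. IMU synchronization mentioned above, one possible approach that does not require downsampling is to look up, for each scan, the IMU sample whose sim timestamp is closest. A minimal sketch with made-up names:

    # Hedged sketch: for each lidar scan, pick the IMU sample closest in
    # (simulation) time; imu_stamps must be sorted ascending and non-empty.
    import bisect

    def nearest_imu(scan_stamp, imu_stamps, imu_samples):
        i = bisect.bisect_left(imu_stamps, scan_stamp)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(imu_stamps)]
        best = min(candidates, key=lambda j: abs(imu_stamps[j] - scan_stamp))
        return imu_samples[best]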
Original comment by Alfredo Bencomo (Bitbucket: bencomo).
Thanks Martin for testing it. I’m glad it is working better for you now.
Good luck at RoboTour 2019.
Original report (archived issue) by Martin Dlouhy (Bitbucket: robotikacz).
Is it possible to change the X2 IMU update frequency from 250Hz to, say, 50Hz? It looks like in a 2-minute simulation it is 10s behind the odometry (running at 50Hz).