cra-ros-pkg / robot_localization

robot_localization is a package of nonlinear state estimation nodes. The package was developed by Charles River Analytics, Inc. Please ask questions on answers.ros.org.
http://www.cra.com

Latency possibly due to robot localization #255

Closed akifh closed 8 years ago

akifh commented 8 years ago

Hi all,

I'm working on fusing odom and IMU data with robot_localization. The robot has a UTM-30LX LIDAR on it. After setting up robot_localization via ekf_localization, a latency in the odom frame appeared in rviz. I have checked various Q&As on answers.ros.org and tried many different configurations for diff_drive_controller and robot_localization, but none of them helped. Could this be due to latency in robot_localization? The delay propagates through gmapping and the map becomes distorted.

In odom-only mode, I have less drift and obstacles stay more or less in the same location. However, in odom-imu mode, obstacles drift in the direction opposite to the rotation, and the drift catches up if I stop. If I keep rotating without stopping, the drift accumulates and does not catch up when I finally stop.
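One quick way to put a number on the lag is a throwaway rospy script along these lines (assuming the filter output is on the default /odometry/filtered topic; adjust the name if yours is remapped):

#!/usr/bin/env python
# Rough diagnostic: report how far the EKF output's timestamp trails ROS time.
import rospy
from nav_msgs.msg import Odometry

def callback(msg):
    lag = (rospy.Time.now() - msg.header.stamp).to_sec()
    rospy.loginfo("ekf output lag: %.3f s", lag)

if __name__ == '__main__':
    rospy.init_node('ekf_lag_check')
    rospy.Subscriber('/odometry/filtered', Odometry, callback)
    rospy.spin()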

For visualization and playback purposes, I have recorded two bags and videos.

odom-only mode:

Video: https://www.youtube.com/watch?v=6B_GOFptyzU
Rosbag: https://drive.google.com/file/d/0B7u825B2XytlMWl1MzhteTRDRzA/view?usp=sharing
Recorded topics:

mobile_base_controller config:

mobile_base_controller:
  type        : "diff_drive_controller/DiffDriveController"
  left_wheel  : 'wheel_left_joint'
  right_wheel : 'wheel_right_joint'
  publish_rate: 50.0
  pose_covariance_diagonal: [0.01, 0.01, 1000000.0, 1000000.0, 1000000.0, 1.0]
  twist_covariance_diagonal: [0.01, 1000000.0, 1000000.0, 1000000.0, 1000000.0, 1.0]

  wheel_separation_multiplier: 1.000
  wheel_radius_multiplier    : 0.95

  enable_pose_covariance_update: true
  error_constant_left : 0.0
  error_constant_right: 0.0

  enable_odom_tf: true

  # Velocity commands timeout [s]
  cmd_vel_timeout: 0.25

  # Base frame_id
  base_frame_id: base_footprint

odom-imu mode:

Video: https://www.youtube.com/watch?v=eHBAoJwPVp0
Rosbag: https://drive.google.com/file/d/0B7u825B2XytldGxGNzhDQ1kzbXM/view?usp=sharing
Recorded topics:

mobile_base_controller config:

mobile_base_controller:
  type        : "diff_drive_controller/DiffDriveController"
  left_wheel  : 'wheel_left_joint'
  right_wheel : 'wheel_right_joint'
  publish_rate: 50.0
  pose_covariance_diagonal: [0.01, 0.01, 1000000.0, 1000000.0, 1000000.0, 1.0]
  twist_covariance_diagonal: [0.01, 1000000.0, 1000000.0, 1000000.0, 1000000.0, 1.0]

  wheel_separation_multiplier: 1.000
  wheel_radius_multiplier    : 0.95

  enable_pose_covariance_update: true
  error_constant_left : 0.0
  error_constant_right: 0.0

  enable_odom_tf: false

  # Velocity commands timeout [s]
  cmd_vel_timeout: 0.25

  # Base frame_id
  base_frame_id: base_footprint

ekf_localization config:

ekf_localization:
  frequency: 50
  sensor_timeout: .1
  transform_time_offset: 0.0

  odom0: /mobile_base_controller/odom
  odom0_config: [false, false, false,
                 false, false, false,
                 true, false, false,
                 false, false, true,
                 false, false, false]
  odom0_differential: true
  odom0_relative: true
  odom0_queue_size: 10

  imu0: /imu/data_raw
  imu0_config: [false, false, false,
                false, false, false,
                false, false, false,
                false, false, true,
                false, false, false]

  imu0_differential: false
  imu0_relative: true
  imu0_queue_size: 10
  imu0_remove_gravitational_acceleration: true

  odom_frame: odom
  base_link_frame: base_footprint
  world_frame: odom
  two_d_mode: true

  # x, y, z, a, b, g, x_d, y_d, z_d, a_d, b_d, g_d, x_dd, y_dd, z_dd
  initial_estimate_covariance: [1e-9, 0,    0,    0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0,
                                0,    1e-9, 0,    0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0,
                                0,    0,    1e-9, 0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0,
                                0,    0,    0,    1e-9, 0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0,
                                0,    0,    0,    0,    1e-9, 0,    0,    0,    0,    0,     0,     0,     0,    0,    0,
                                0,    0,    0,    0,    0,    1e-9, 0,    0,    0,    0,     0,     0,     0,    0,    0,
                                0,    0,    0,    0,    0,    0,    1e-9, 0,    0,    0,     0,     0,     0,    0,    0,
                                0,    0,    0,    0,    0,    0,    0,    1e-9, 0,    0,     0,     0,     0,    0,    0,
                                0,    0,    0,    0,    0,    0,    0,    0,    1e-9, 0,     0,     0,     0,    0,    0,
                                0,    0,    0,    0,    0,    0,    0,    0,    0,    1e-9,  0,     0,     0,    0,    0,
                                0,    0,    0,    0,    0,    0,    0,    0,    0,    0,     1e-9,  0,     0,    0,    0,
                                0,    0,    0,    0,    0,    0,    0,    0,    0,    0,     0,     1e-9,  0,    0,    0,
                                0,    0,    0,    0,    0,    0,    0,    0,    0,    0,     0,     0,     1e-9, 0,    0,
                                0,    0,    0,    0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    1e-9, 0,
                                0,    0,    0,    0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    1e-9]

  process_noise_covariance: [0.05, 0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                            0,    0.05, 0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                            0,    0,    0.06, 0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                            0,    0,    0,    0.03, 0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                            0,    0,    0,    0,    0.05, 0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                            0,    0,    0,    0,    0,    0.06, 0,     0,     0,    0,    0,    0,    0,    0,    0,
                            0,    0,    0,    0,    0,    0,    0.025, 0,     0,    0,    0,    0,    0,    0,    0,
                            0,    0,    0,    0,    0,    0,    0,     0.025, 0,    0,    0,    0,    0,    0,    0,
                            0,    0,    0,    0,    0,    0,    0,     0,     0.04, 0,    0,    0,    0,    0,    0,
                            0,    0,    0,    0,    0,    0,    0,     0,     0,    0.01, 0,    0,    0,    0,    0,
                            0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0.01, 0,    0,    0,    0,
                            0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0.2, 0,    0,    0,
                            0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0.01, 0,    0,
                            0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0.01, 0,
                            0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0.015]

Any suggestions on this issue? Is it related to r_l? If any additional information is needed, I can supply it. Thanks. @ayrton04

ayrton04 commented 8 years ago

What version of the code are you using? This is a common problem, and part of it is that I need to remove the tf message filters, but there are some things you can do as well.

First, in your process_noise_covariance, increase the last diagonal value (yaw velocity). Your measurements for yaw velocity have a variance of 0.02, and when fused, the filter will almost certainly end up with an extremely tiny variance. At the next time step, you'll add the 0.015 to that tiny value and then fuse the measurement, but because the filter's variance for yaw velocity will be as small as or smaller than the variance of the measurement's yaw velocity, the filter will be sluggish in fusing it. Make that last value at least an order of magnitude larger. Heck, make it 1.0 and just see if it helps.
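As an illustration only (not r_l code): the state order in the 15x15 matrix is the one noted in the yaml comment above, [x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az], so the yaw-velocity term sits at the 12th diagonal position. A quick numpy sketch for generating the row-major list to paste into the yaml, with that entry bumped to 1.0:

import numpy as np

# State order: x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az
diag = [0.05, 0.05, 0.06, 0.03, 0.05, 0.06,
        0.025, 0.025, 0.04, 0.01, 0.01, 0.2,
        0.01, 0.01, 0.015]
q = np.diag(diag)
q[11, 11] = 1.0  # yaw-velocity (vyaw) process noise, raised as suggested

# Row-major list suitable for the process_noise_covariance yaml parameter
print(', '.join('%g' % v for v in q.flatten()))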

If it doesn't, try this fork and let me know if it's better:

https://github.com/clearpathrobotics/robot_localization

I'll file a ticket to get rid of message filters, as they induce a lag in measurement processing.

akifh commented 8 years ago

I just got back to work. I will give it a try and post the results. Thanks for the reply.

NikolausDemmel commented 8 years ago

@ayrton04: You suggest using the Clearpath fork. Can you quickly summarize the main difference in that version?

ayrton04 commented 8 years ago

@NikolausDemmel the main change (there are several) is that the main run loop is threaded. The root cause of the delay here is tf2 MessageFilters. When we receive, for example, an odometry message, I break it into a pose message and a twist message, then pass each component through the same MessageFilters as I do for new Pose or Twist messages. This burns an additional spin cycle: if a message is received by the odometry callback at t1, we add it to the MessageFilter, then (I think) a second cycle at t2 is burned processing the message, and it finally gets processed at t3. By threading with a fast loop rate, t1, t2, and t3 happen in rapid succession and aren't tied to the filter update rate, so the measurement is processed more or less immediately.

I plan to remove the message filters, though, so I'm not sure if there will be any benefit to threading at that point.

NikolausDemmel commented 8 years ago

@ayrton04: Thanks for the explanation. That makes sense. I ought to try out the Clearpath fork myself.

akifh commented 8 years ago

@ayrton04, I'm now on to your suggestions. I switched to the Clearpath fork, but I'm confused about the first part: I thought the last diagonal value in process_noise_covariance was Z acceleration, yet you mentioned, "First, in your process_noise_covariance, increase the last diagonal value (yaw velocity)." Is that correct? I will proceed depending on the answer.

ayrton04 commented 8 years ago

@akifh Yes, you are correct, sorry. The last diagonal value is Z acceleration variance.

akifh commented 8 years ago

The Clearpath fork seems a bit faster than the regular one, though it did not solve the problem. As you mentioned, the lag seems to be caused by the MessageFilters. I have improved the situation with some configuration changes, but now there is some jitter in the ekf_localization output (see the blue line in the figure below).

[Figure: plot of the ekf_localization output, with the jittery filter output shown as the blue line]

Any idea what might be causing this?

The current config is this:

ekf_localization:
  frequency: 50
  sensor_timeout: .1
  transform_time_offset: 0.0

  odom0: /mobile_base_controller/odom
  odom0_config: [false, false, false,
                 false, false, false,
                 true, false, false,
                 false, false, true,
                 false, false, false]
  odom0_differential: true
  odom0_relative: true
  odom0_queue_size: 10

  imu0: /imu/data_raw
  imu0_config: [false, false, false,
                false, false, false,
                false, false, false,
                false, false, true,
                false, false, false]

  imu0_differential: false
  imu0_relative: true
  imu0_queue_size: 10
  imu0_remove_gravitational_acceleration: true

  odom_frame: odom
  base_link_frame: base_footprint
  world_frame: odom
  two_d_mode: true

  # x, y, z, a, b, g, x_d, y_d, z_d, a_d, b_d, g_d, x_dd, y_dd, z_dd
  initial_estimate_covariance: [1e-9, 0,    0,    0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0,
                                0,    1e-9, 0,    0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0,
                                0,    0,    1e-9, 0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0,
                                0,    0,    0,    1e-9, 0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    0,
                                0,    0,    0,    0,    1e-9, 0,    0,    0,    0,    0,     0,     0,     0,    0,    0,
                                0,    0,    0,    0,    0,    1e-9, 0,    0,    0,    0,     0,     0,     0,    0,    0,
                                0,    0,    0,    0,    0,    0,    1e-9, 0,    0,    0,     0,     0,     0,    0,    0,
                                0,    0,    0,    0,    0,    0,    0,    1e-9, 0,    0,     0,     0,     0,    0,    0,
                                0,    0,    0,    0,    0,    0,    0,    0,    1e-9, 0,     0,     0,     0,    0,    0,
                                0,    0,    0,    0,    0,    0,    0,    0,    0,    1e-9,  0,     0,     0,    0,    0,
                                0,    0,    0,    0,    0,    0,    0,    0,    0,    0,     1e-9,  0,     0,    0,    0,
                                0,    0,    0,    0,    0,    0,    0,    0,    0,    0,     0,     1e-9,  0,    0,    0,
                                0,    0,    0,    0,    0,    0,    0,    0,    0,    0,     0,     0,     1e-9, 0,    0,
                                0,    0,    0,    0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    1e-9, 0,
                                0,    0,    0,    0,    0,    0,    0,    0,    0,    0,     0,     0,     0,    0,    1e-9]

  process_noise_covariance: [0.05, 0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                            0,    0.05, 0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                            0,    0,    0.06, 0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                            0,    0,    0,    0.03, 0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                            0,    0,    0,    0,    0.05, 0,    0,     0,     0,    0,    0,    0,    0,    0,    0,
                            0,    0,    0,    0,    0,    0.06, 0,     0,     0,    0,    0,    0,    0,    0,    0,
                            0,    0,    0,    0,    0,    0,    0.025, 0,     0,    0,    0,    0,    0,    0,    0,
                            0,    0,    0,    0,    0,    0,    0,     0.025, 0,    0,    0,    0,    0,    0,    0,
                            0,    0,    0,    0,    0,    0,    0,     0,     0.04, 0,    0,    0,    0,    0,    0,
                            0,    0,    0,    0,    0,    0,    0,     0,     0,    0.01, 0,    0,    0,    0,    0,
                            0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0.01, 0,    0,    0,    0,
                            0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0.02, 0,    0,    0,
                            0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0.01, 0,    0,
                            0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0.01, 0,
                            0,    0,    0,    0,    0,    0,    0,     0,     0,    0,    0,    0,    0,    0,    0.015]
ayrton04 commented 8 years ago

Not convinced it's jitter; it may just be the result of having a higher output frequency than input frequency. Is this with the regular fork or the Clearpath one? If it's the latter, can you try the upstream fork?

ayrton04 commented 8 years ago

I've fixed the message filter issue in this fork (see #272); feel free to give it a shot. However, we've also been having trouble with this issue lately, and I believe a better solution (the MessageFilter fix helps a bit, but not a lot) is to add a control term to the filters. Right now, the prediction step just projects the current angular velocity forward, even though we may be undergoing angular acceleration. Adding a control term will let us account for this.
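Sketching the reasoning (my notation, not r_l's internals): without a control input, the prediction assumes the yaw rate is constant over the update interval, so during a commanded acceleration the estimate lags; a control term instead estimates an angular acceleration from the commanded yaw rate and folds it into the prediction, roughly:

% Without a control term: constant-velocity prediction
\dot\psi_{k+1|k} = \dot\psi_k, \qquad \psi_{k+1|k} = \psi_k + \dot\psi_k \, \Delta t

% With a control term: acceleration estimated from the commanded yaw rate
% (gain g, clamped by acceleration limits)
\ddot\psi_k \approx g \, (\dot\psi^{\mathrm{cmd}}_k - \dot\psi_k), \qquad
\dot\psi_{k+1|k} = \dot\psi_k + \ddot\psi_k \, \Delta t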

akifh commented 8 years ago

To answer your previous question, it was with the Clearpath fork. I can try decreasing the filter frequency, but in that case I assume the time lost to the extra spin cycles will be a bit greater. I will try #272 and report back. However, as you mentioned, adding a control term seems like a better way to deal with this.

ayrton04 commented 8 years ago

Try indigo-devel on this fork for a version that uses a control term:

https://github.com/locusrobotics/robot_localization

For example control parameters, see the EKF yaml config. Note that this hasn't been rigorously tested yet.

EDIT: Note that it looks for the control on the cmd_vel topic, so you'll have to remap it if yours doesn't match.
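For reference, a sketch of what the control-related parameters look like in the EKF yaml (parameter names as I recall them from the template; the values here are purely illustrative):

ekf_localization:
  use_control: true
  # If false, the control is a geometry_msgs/Twist on cmd_vel; if true, a TwistStamped
  stamped_control: false
  control_timeout: 0.2
  # Which control dimensions to use: [vx, vy, vz, vroll, vpitch, vyaw]
  control_config: [true, false, false, false, false, true]
  acceleration_limits: [1.3, 0.0, 0.0, 0.0, 0.0, 3.4]
  deceleration_limits: [1.3, 0.0, 0.0, 0.0, 0.0, 4.5]
  acceleration_gains: [0.8, 0.0, 0.0, 0.0, 0.0, 0.9]
  deceleration_gains: [1.0, 0.0, 0.0, 0.0, 0.0, 1.0]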

ayrton04 commented 8 years ago

Update: I would also try decreasing your sensor timeout. If you go through a spin cycle and no measurements arrive, the filter just re-publishes the same message unless the sensor_timeout has been met. If the sensor_timeout is reached, it will carry out a prediction step, which should hopefully help smooth out the jitter you see.
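For instance (illustrative value, tied to the 50 Hz filter configured above; roughly one filter period so that idle cycles trigger a prediction rather than a republish):

ekf_localization:
  frequency: 50
  sensor_timeout: 0.02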

ayrton04 commented 8 years ago

Hi, can you try the latest source on this fork? Try enabling the _nodelay parameters, tweaking the covariances, and if neither of those works, try the new control parameters.
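A sketch of that change (as I understand it, the per-sensor _nodelay flags request the tcpNoDelay transport hint on the corresponding subscriptions; they sit next to the other per-sensor settings):

ekf_localization:
  odom0_nodelay: true
  imu0_nodelay: true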

ayrton04 commented 8 years ago

Is there any update on this?

akifh commented 8 years ago

@ayrton04, I haven't had time to test this lately. We can close this and reopen it later if required, if you want.

ayrton04 commented 8 years ago

That's fine. I'll close this and you can reopen as needed. Thanks!