hsp-iit / project-ergocub

A demonstration of human-robot interaction on the ergoCub through vision.

ekf from `robot_localization` for odometry sensor fusion #120

Open SimoneMic opened 1 year ago

SimoneMic commented 1 year ago

Try testing `robot_localization` to fuse the yaw information from the waist_imu with the direct-kinematics odometry of the walking-controller.
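For reference, a minimal sketch of what the `ekf_node` configuration might look like for this setup. The topic names and rates below are placeholders, not the actual configuration on the robot; only `root_link` comes from this thread.

```yaml
ekf_filter_node:
  ros__parameters:
    frequency: 50.0
    publish_tf: true

    map_frame: map
    odom_frame: odom
    base_link_frame: root_link
    world_frame: odom

    # Walking-controller odometry: fuse planar position (x, y)
    odom0: /walking_controller/odom          # placeholder topic name
    odom0_config: [true,  true,  false,      # x, y, z
                   false, false, false,      # roll, pitch, yaw
                   false, false, false,      # vx, vy, vz
                   false, false, false,      # vroll, vpitch, vyaw
                   false, false, false]      # ax, ay, az
    odom0_differential: false

    # Waist IMU: fuse yaw only
    imu0: /waist_imu/measurements            # placeholder topic name
    imu0_config: [false, false, false,
                  false, false, true,        # yaw
                  false, false, false,
                  false, false, false,
                  false, false, false]
    imu0_differential: false
    imu0_remove_gravitational_acceleration: true
```

The two `*_config` vectors select which of the 15 state variables (x, y, z, roll, pitch, yaw, their velocities, and linear accelerations) each source contributes; whether to also fuse yaw from the odometry (e.g. differentially) is a tuning choice.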

The first test with "vanilla" parameters for the covariance matrices gives worse results than the standard localization with the normal odometry and AMCL (note: AMCL is still used on top of the fused odometry, with the same parameters).

Example 1:

https://github.com/hsp-iit/project-ergocub/assets/86918431/7e53326e-637b-4596-876b-bfd33c46d5c2

In this case there is a visualization oscillation in Rviz2. This is due to the odom->root_link transform, which is fused from the IMU and the information coming from the internal kinematics. This causes oscillations in roll and pitch, but it shouldn't affect the rest of the navigation since it operates in 2D. In this case the planner stopped because it saw obstacles over the goal (the depth camera detected the floor as an obstacle).
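A side note on the roll/pitch oscillation: since the navigation is purely planar, `robot_localization` can be told to project the estimate onto the plane, which fuses zero values for z, roll, pitch and their derivatives. A small snippet (same assumed node name as in the sketch above):

```yaml
ekf_filter_node:
  ros__parameters:
    # Forces z, roll and pitch (and their velocities/accelerations) to zero,
    # which should remove the roll/pitch oscillation of odom->root_link in Rviz2.
    two_d_mode: true
```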

We could start fusing more than just the yaw information from the IMU to try to improve the overall behavior; the EKF must also be tuned.
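If we move beyond yaw-only fusion, the change is mainly in the `imu0_config` vector plus process-noise tuning. A sketch of what that might look like (which entries to enable is an assumption to be validated on the robot):

```yaml
ekf_filter_node:
  ros__parameters:
    # Fuse yaw and yaw rate from the IMU; the last row could also be enabled
    # to fuse linear accelerations if they prove reliable.
    imu0_config: [false, false, false,       # x, y, z
                  false, false, true,        # roll, pitch, yaw
                  false, false, false,       # vx, vy, vz
                  false, false, true,        # vroll, vpitch, vyaw
                  false, false, false]       # ax, ay, az
    # Tuning then usually moves to the diagonal of process_noise_covariance
    # (a flat 15x15 matrix in the parameter file): raising the yaw entry makes
    # the filter follow the yaw measurements more quickly, while lowering it
    # smooths the estimate at the cost of a slower response.
```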

SimoneMic commented 1 year ago

As a benchmark, I leave here a normal localization and navigation run using our custom theta-star planner as a reference.

https://github.com/hsp-iit/project-ergocub/assets/86918431/d07943c0-7e19-462d-bd48-9bbf6d701a81

In this case, noise in the pointcloud was seen as an obstacle, so the planner decided to take the longer path. This happens because the whole costmap is treated as a lethal area (so even a single point can block the corridor).
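Not part of the EKF work, but for completeness: a hedged sketch of what a graded (non-fully-lethal) local costmap could look like in Nav2, with a height filter to drop floor points. Topic names and values are placeholders, and whether a graded cost plays well with the custom theta-star planner would have to be checked.

```yaml
local_costmap:
  local_costmap:
    ros__parameters:
      plugins: ["obstacle_layer", "inflation_layer"]
      obstacle_layer:
        plugin: "nav2_costmap_2d::ObstacleLayer"
        observation_sources: depth_cloud
        depth_cloud:
          topic: /depth_camera/points        # placeholder topic name
          data_type: "PointCloud2"
          marking: true
          clearing: true
          min_obstacle_height: 0.05          # drop floor points seen as obstacles
          max_obstacle_height: 2.0
      inflation_layer:
        plugin: "nav2_costmap_2d::InflationLayer"
        inflation_radius: 0.55
        cost_scaling_factor: 3.0             # decaying cost instead of a fully lethal band
```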