swri-robotics / mapviz

Modular ROS visualization tool for 2D data.
BSD 3-Clause "New" or "Revised" License

Aligning odometry and fix messages without globally referenced heading for odometry #758

Closed jmericks closed 2 years ago

jmericks commented 2 years ago

[Screenshot: mapviz_20220522T045046_231876]

Ubuntu 20.04, ROS Noetic -

I am struggling with figuring out how to align my odometry messages (which come from a visual odometry (VO) algorithm running on a stereo camera) with my globally referenced GPS points. I am attempting to use GPS as ground truth for how well my VO is running.

The VO publishes odometry messages for base_link with reference to the map frame. The output follows the ROS standard of X forward, Y left:

header: 
  seq: 247
  stamp: 
    secs: 1653192726
    nsecs: 689634559
  frame_id: "map"
child_frame_id: "base_link"
pose: 
  pose: 
    position: 
      x: 122.15685754202518
      y: 17.37065109891772
      z: -0.8045191563706255
    orientation: 
      x: 0.0
      y: 0.0
      z: 0.0
      w: 0.0

I then have a GPS /fix message with frame_id "/gps", which I receive from the nmea_navsat_driver package (https://github.com/ros-drivers/nmea_navsat_driver).

Neither of these messages has a globally referenced heading, which is where my problem lies. In the image above, you can see that the VO output looks as though its starting heading is due East, which makes sense given its starting orientation of (0, 0, 0, 0), or due East in UTM terms. If my GPS messages contained a heading, I could align the starting pose with that heading, but NavSatFix messages don't provide one. I thought maybe I needed to align the "/gps" and "/wgs84" frames, but when I aligned them with a static transform, I got the following error and warning:

[ERROR] [1653193812.946166729]: [transform_manager]: Could not find a connection between 'wgs84' and 'map' because they are not part of the same tree.Tf has two or more unconnected trees.
[ WARN] [1653193812.946191370]: [transform_manager]: Failed to get tf transform ('map' to '/wgs84').  Both frames exist in tf.

To make my maps look pretty, I threw the VO Odometry message into a different frame ("VO_Frame"), and then manually guess and checked on a static transform between the map and "VO_Frame" until they were more or less aligned. Of course, this isn't a very rigorous way of going about this.

I am wondering if anyone might have any ideas, either with other SWRI Utilities or elsewhere, where I can find a reliable way of aligning my frames so that they can be compared.

malban commented 2 years ago

I think there are maybe a few different issues here.

1. The "wgs84" frame is not a real TF frame, but a pseudo frame used by mapviz. It represents data that is in lat/lon, which can't be transformed with a normal TF transform (translation, rotation) to the other frames in TF, which are in meters.

mapviz and swri_transform_util understand this frame and, given a local origin and an associated TF frame, will convert lat/lon data to that TF frame with a planar projection centered on the local origin.

That is defined in the launch file: https://github.com/swri-robotics/mapviz/blob/master/mapviz/launch/mapviz.launch#L5
- Note: It's important that the origin is not too far away from the data being projected.
- You can make the associated frame, which represents a local planar projection of lat/lon, anything you want, perhaps "global_map", since you are already using "map" for VO.
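For reference, the origin setup in the linked launch file looks roughly like the snippet below, here adapted (as an assumption, not a drop-in config) to use a "global_map" frame; the origin name and coordinates are placeholders you would replace with a point near your own data:

```xml
<node pkg="swri_transform_util" type="initialize_origin.py" name="initialize_origin" output="screen">
  <!-- TF frame that will represent the local planar projection of lat/lon -->
  <param name="local_xy_frame" value="global_map"/>
  <!-- Which named origin from the list below to use -->
  <param name="local_xy_origin" value="my_test_site"/>
  <rosparam param="local_xy_origins">
    [{ name: my_test_site,
       latitude: 29.45196669,
       longitude: -98.61370577,
       altitude: 233.719,
       heading: 0.0 }]
  </rosparam>
</node>
```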

2. The "global_map" frame then needs to be connected to the rest of the TF tree. As you said, this could be done with a static transform if you always started up the bot from the same location and pointing in the same direction, but this has the additional limitation that it won't take into account heading or position drift in the VO.

3. To fix this you'll generally want two state estimates: a global one in "global_map" from GPS, and a local or relative one in "map" from VO. Then a dynamic transform is published connecting the two, so that it converts the vehicle pose from the VO state estimate in the "map" frame to the global state estimate in the "global_map" frame.

          needed transform
                |
                V
    global_map <--- map <--- base_link
Given the vehicle pose from the global state estimate and the vehicle pose from VO estimate, it should be straightforward to calculate and publish the transform between the two at a fixed rate.
  1. You are correct that you will need the heading of the global estimate to get everything aligned. A simple way to do this is to infer the heading from the motion of the bot in the global frame. This is only valid while the bot is in motion, so you would want to publish the inferred heading only when the velocity is above some threshold. This inferred heading would go into the global state estimate.

  2. The global state estimate could be as basic as directly converting the GPS lat/lon to "global_map" with the planar projection and using the inferred heading, or, preferably, it could be a Kalman filter that incorporates the projected GPS data, the inferred heading, wheel speed, IMU, etc., to provide a smoother and higher-frequency state estimate.
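The heading inference and the map → global_map transform described above can be sketched in plain 2D math. This is a hypothetical helper, not part of swri_transform_util; in a real node you would wrap the result in a geometry_msgs TransformStamped and publish it with a tf2_ros TransformBroadcaster:

```python
import math

def inferred_heading(p_prev, p_curr, dt, min_speed=0.5):
    """Heading inferred from motion in the global frame.

    Returns None when the speed is below min_speed (m/s), since the
    inferred heading is only valid while the bot is in motion.
    """
    dx = p_curr[0] - p_prev[0]
    dy = p_curr[1] - p_prev[1]
    speed = math.hypot(dx, dy) / dt
    if speed < min_speed:
        return None
    return math.atan2(dy, dx)

def map_to_global_transform(pose_map, pose_global):
    """2D transform (tx, ty, dtheta) that maps the vehicle pose expressed
    in the "map" (VO) frame onto the same pose expressed in "global_map".

    Each pose is (x, y, yaw). Applying the returned rotation then
    translation to pose_map reproduces pose_global.
    """
    xm, ym, thm = pose_map
    xg, yg, thg = pose_global
    dth = thg - thm
    c, s = math.cos(dth), math.sin(dth)
    # Translation is whatever is left over after rotating the map pose.
    tx = xg - (c * xm - s * ym)
    ty = yg - (s * xm + c * ym)
    return tx, ty, dth
```

Recomputing and publishing this transform at a fixed rate keeps the VO track pinned to the global estimate even as the VO drifts.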

jmericks commented 2 years ago

@malban Thank you for the detailed explanation. I had not understood exactly what was going on with the wgs84 frame, so that was complicating things. As far as global heading is concerned, I have come to a similar conclusion. I started by using the Kabsch algorithm (for aligning sets of vectors) to get the optimal rotation between the global (global_map) and local (map) reference frames. This is of course just an estimate, but at least it has a good basis to it! I have also started recording test sets where I use a GPS compass to get the starting heading referenced to true north. This has helped tremendously as well.

I have used the navsat_transform node in the past, and it works wonderfully for what you are describing with the Kalman filter. Here I was just attempting to keep GPS and VO fully separate so that there are no confounding variables in the comparison between the two (the more I align the VO with GPS using actual GPS points, the better VO will appear to work). This was one of the consequences I saw with the Kabsch algorithm: it wasn't concerned with aligning the initial heading, but with minimizing the least-squares error over the entire set of vectors. Reducing the vector set to the first few points of GPS/VO made this much better as well.
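For completeness, the Kabsch step mentioned above can be written in a few lines of NumPy. This is a generic sketch of the standard algorithm with made-up names, not code from this project; P and Q would be N×d arrays of corresponding VO and GPS positions:

```python
import numpy as np

def kabsch_rotation(P, Q):
    """Least-squares rotation R such that R @ p_i best matches q_i,
    after removing the centroids of both point sets."""
    P_c = P - P.mean(axis=0)
    Q_c = Q - Q.mean(axis=0)
    H = P_c.T @ Q_c                      # d x d cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so that det(R) = +1
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (H.shape[0] - 1) + [d])
    return Vt.T @ D @ U.T
```

Applying the recovered rotation (plus the centroid offset) to the VO track aligns it with the GPS track in a least-squares sense; as noted above, restricting P and Q to the first few points weights the alignment toward the initial heading instead of the whole trajectory.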