introlab / rtabmap

RTAB-Map library and standalone application
https://introlab.github.io/rtabmap

Support for absolute position/ attitude measurements #188

Open jgoppert opened 7 years ago

jgoppert commented 7 years ago

I would like to add links to the graph-slam algorithm in rtabmap for measurements of absolute position and attitude. For instance, GPS, or known landmarks in an environment. Please let me know if this is already implemented. If it hasn't been implemented I would be happy to add it and feel capable of doing so with a few pointers of where to add it in the code. I presume I would need to add a new type of sensor data.

matlabbe commented 7 years ago

Hi,

For attitude, this could be somewhat included in the odometry values. For absolute position, there is currently no way in rtabmap to add them (as fixed constraints in the graph, for example). With the g2o graph optimization approach selected, there could be a way to do this. See OptimizerG2O::optimize(). For example, we may want to add a new g2o::VertexSE3 to the graph and set it as fixed like the root node, then add a g2o::EdgeSE3 between the corresponding node of the map's graph and this new absolute node (with some covariance).
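As a concrete sketch of that idea in g2o's text file format (all ids and values hypothetical): map node 5 gets tied to a new absolute vertex 1000, which is held fixed with a FIX line; the 21 values after the measurement are the upper triangle of the 6x6 information matrix:

```
# new absolute vertex (id 1000) at the measured global position, held fixed
VERTEX_SE3:QUAT 1000 2.0 3.0 0.0 0 0 0 1
FIX 1000
# edge from map node 5 to the fixed vertex; information = inverse covariance
EDGE_SE3:QUAT 5 1000 0 0 0 0 0 0 1 10 0 0 0 0 0 10 0 0 0 0 10 0 0 0 10 0 0 10 0 10
```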

SensorData could be extended with GPS coordinate (relative to start or geo-referenced) + GPS covariance.

Passing these values to g2o could be done in Memory::getMetricConstraints(), which could get the GPS pose like the odom pose here; then we call optimize() here with the added absolute GPS poses + GPS covariances.

Before going into code, you could export the rtabmap's graph in g2o format (see file->Export poses...), add manually these absolute constraints to g2o file, then use g2o_viewer tool to optimize this file and see results (similar to the end of this tutorial).

cheers, Mathieu

jgoppert commented 7 years ago

Thanks, will check out adding a link on g2o.

Regarding the attitude, I have a few questions:

  1. When rtabmap starts up it seems to assume 0 roll, 0 pitch, 0 yaw, which makes sense given no other data. Is this correct?
  2. It looks like odometry uses rotation rates and velocity. How can we fuse in the PX4 roll/pitch/yaw in the existing setup?
matlabbe commented 7 years ago

Hi,

  1. Yes, odometry could also be initialized to a specific pose/orientation. For example, roll and pitch can be set according to gravity, and yaw according to the magnetometer.

  2. In the rtabmap library, velocity is not used by default, only if the experimental extended Kalman filter (based on this tutorial and this one) is enabled or Odom/GuessMotion is true. To do fusion with PX4, you may use an external Kalman filter like the robot_localization package to fuse rtabmap's visual odometry and PX4's attitude. For example, in this example, position is taken from visual odometry and orientation is taken from the IMU.

Note that this is a loosely coupled visual inertial odometry approach (IMU and image stamps synchronized by software); you may look at a tightly coupled visual inertial approach to get better results (like Tango, rovio, ...).
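The orientation initialization mentioned in point 1 (roll/pitch from gravity, yaw from the magnetometer) can be sketched as follows. This is a hypothetical helper under one common sign convention; the exact signs depend on your axis conventions:

```python
import math

def attitude_from_accel_mag(ax, ay, az, mx, my, mz):
    # Roll/pitch from the gravity direction (accelerometer reads +g on z
    # when level), yaw from a tilt-compensated magnetometer reading.
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    # Project the magnetic field into the horizontal plane before atan2.
    xh = mx * math.cos(pitch) + mz * math.sin(pitch)
    yh = (mx * math.sin(roll) * math.sin(pitch) + my * math.cos(roll)
          - mz * math.sin(roll) * math.cos(pitch))
    yaw = math.atan2(-yh, xh)
    return roll, pitch, yaw

# Level vehicle with the field pointing along +x: roll = pitch = yaw = 0.
r, p, y = attitude_from_accel_mag(0.0, 0.0, 9.81, 1.0, 0.0, 0.0)
```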

cheers, Mathieu

jgoppert commented 7 years ago

I'm one of the two main navigation guys on the PX4 core dev team, so I believe I can add whatever we need on the PX4 side to make this work for both loosely and tightly coupled setups.

Loosely coupled

I now have a working loosely coupled setup that takes the visual odometry from rtabmap and fuses it using an invariant extended Kalman filter, assuming a fixed measurement covariance for the rtabmap update (x, y, z, vx, vy, vz). I haven't included the visual odometry attitude measurement yet, but I'm working on it, so I don't think there is a reason to run another EKF like robot_localization. Also, if we ran another package like robot_localization and it computed the final estimate, we would then have to send that back to PX4 to close the loop, which seems messy. Here is a picture of the loosely coupled system flying in gazebo. I got this working yesterday and it's going very well so far. If I hook the attitude correction up to PX4, should this work without a hitch, or will the attitudes of rtabmap and PX4 drift away from each other over time? Also, PX4 initially knows the attitude (roll, pitch, yaw), so do I need to set the initial state for rtabmap with this, or can I just publish the odometry from PX4 and everything will work as desired?

Update:

  1. Currently, if I start facing north (0 deg) everything works (since rtabmap's initial yaw angle is aligned with 0), but if I start facing another direction it breaks, so I need to figure this out.

[image]

Tightly coupled

For tightly coupled, my thought is that between rtabmap updates the EKF on PX4 updates the covariances, then sends the updated odometry-with-covariance message to rtabmap; rtabmap then does a SLAM correction with this and computes a new covariance and state delta, which we send (only for the vehicle states, not the landmarks) back to PX4 as a state reset where P += dP, x += dx, and dP is the change in covariance computed from SLAM. I do some bounding on P so it shouldn't go negative or anything crazy. Let me know if you see any problems with this approach.
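The reset-with-bounding step described above could be sketched like this (variable names hypothetical); eigenvalue clipping is one simple way to keep P symmetric positive-definite after applying the delta:

```python
import numpy as np

def apply_slam_reset(x, P, dx, dP, var_min=1e-6, var_max=1e3):
    # Apply the state/covariance deltas from the SLAM correction, then
    # bound the eigenvalues so P stays symmetric positive-definite.
    x = x + dx
    P = P + dP
    P = 0.5 * (P + P.T)            # enforce symmetry
    w, V = np.linalg.eigh(P)       # eigen-decomposition of symmetric P
    w = np.clip(w, var_min, var_max)
    return x, V @ np.diag(w) @ V.T

x0 = np.zeros(3)
P0 = np.eye(3)
dx = np.array([0.1, 0.0, 0.0])
dP = -2.0 * np.eye(3)              # would drive P negative without bounding
x1, P1 = apply_slam_reset(x0, P0, dx, dP)
```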

RTAB-map estimator architecture

When I send rtab-map an odometry message from PX4, it looks like odometry is running another EKF; can you explain what that is for? I would think that PX4 and the cameras would just supply rtabmap with pose measurements and the graph-slam would fuse all of that without using another EKF, but I haven't worked with graph-slam directly yet, just read about it in Probabilistic Robotics. My guess is that visual odometry doesn't provide a pose directly, so you need to turn it into a pose with a Kalman filter, is that correct? If so, and if PX4 is doing this for us, is there a way to side-step the odometry code?

matlabbe commented 7 years ago

Loosely coupled

The rtabmap node subscribes to a single odometry topic, adds a new node to the graph at 1 Hz, detects loop closures, and republishes the odometry correction as the /map -> /odom TF transform.

In the hand-held tutorial, odometry is computed by the rgbd_odometry node. By default, rgbd_odometry doesn't use any probabilistic filtering: no Kalman filter is used in the rgbd_odometry or rtabmap nodes.

I'm not sure how you did the fusion of PX4 and rgbd_odometry without robot_localization. Do you have a schema of the nodes and messages used between them? Is the rgbd_odometry code modified? The setup of sensor_fusion.launch uses a single EKF (robot_localization): [image] (image from this post; actually /odm_imu is a sensor_msgs/Imu msg, not a nav_msgs/Odometry msg)

With robot_localization, it is possible to estimate using only velocities (or use absolute poses as differential) to avoid problems where visual odometry and imu absolute orientation diverge.

Tightly coupled

Here you should forget the rtabmap node and focus on fusion of the rgbd_odometry output and the PX4 output. rtabmap comes after sensor fusion and would not have feedback to PX4 or rgbd_odometry. However, by tightly coupled I meant having feedback from PX4 into rgbd_odometry, or vice versa: implementing the visual odometry approach alongside the PX4 Kalman filter, for example (visual odometry uses the filter prediction + IMU to compute the visual transform). If visual computation is impossible (e.g., camera occluded, no visual features), the filter is still computing poses using the attitude/accelerometer of PX4.

RTAB-Map estimator architecture

Yes, rtabmap requires a single pose. If multiple sensors are used to compute this pose, they should be merged before the rtabmap node (in this case, robot_localization can be used for fusion of multiple position/orientation sources). Visual odometry generates 6DoF poses (as odometry messages), so they can be used directly as input to rtabmap. You could also try to feed odometry messages from PX4 directly to rtabmap, without starting the rgbd_odometry node, to compare.

cheers, Mathieu

jgoppert commented 7 years ago

Thanks!

Focusing on Loosely Coupled First

[image]

(The PX4 code is modified; I'm working on a new branch to support integration with rtabmap.) Here is the code for fusing the visual odometry: https://github.com/PX4/Firmware/blob/3330fb22e8a3eb554bb9d4d1930f524dafa13757/src/modules/iekf/corrections/vision.cpp

I'm sending the message via mavlink with this patch to mavros: https://github.com/mavlink/mavros/pull/706

I didn't modify any of your code.

It is effectively the same as robot_localization, but runs on the STM32 chip. It takes the IMU data and the visual data, and also takes in a lot more sensor data like laser altimeter, magnetometer, etc. I have a full odometry message with covariance as output, which I send to odom_filtered.

matlabbe commented 7 years ago

Yeah, this is a more efficient way than adding another ekf, as mavros node already does it. Thx for the links.

cheers, Mathieu

jgoppert commented 7 years ago

Great, glad I'm headed in the right direction. I still have an issue where, if I rotate more than 10 degrees or so, it will diverge. I suspect there is some confusion about which frame the velocity is expressed in. I was expecting the visual odometry to give results in ENU; does it instead use body forward, left, up?

Also, I'm using fcu as the frame and local_origin as the ground truth frame, so this should fix the initialization issue; it does show in the message log that it sets the yaw to 180 deg if I initialize it in gazebo pointing south. For the guess frame I'm also using fcu, with the thought that the TF should be updated from PX4 with the latest value for the next iteration. Is all of this logic correct, or did I slip up somewhere?

  <group ns="rtabmap">
    <!-- Visual Odometry -->
    <node pkg="rtabmap_ros" type="rgbd_odometry" name="visual_odometry" output="$(arg rtabmap_output)" args="$(arg rtabmap_args)" required="true">
      <remap from="rgb/image"       to="$(arg rgb_topic)"/>
      <remap from="depth/image"     to="$(arg depth_topic)"/>
      <remap from="rgb/camera_info" to="$(arg camera_info_topic)"/>
      <remap from="odom"            to="/mavros/odometry/odom"/>

      <param name="frame_id"               type="string" value="fcu"/>
      <param name="odom_frame_id"          type="string" value="map"/>
      <param name="publish_tf"             type="bool"   value="true"/>
      <param name="publish_null_when_lost" type="bool"   value="false"/>
      <param name="guess_from_tf"          type="bool"   value="true"/>
      <param name="ground_truth_frame_id"  type="string" value="local_origin"/>
      <param name="guess_frame_id"         type="string" value="fcu"/>

      <param name="Odom/FillInfoData"      type="string" value="true"/>
      <param name="Odom/ResetCountdown"    type="string" value="1"/>
      <param name="Vis/FeatureType"        type="string" value="6"/> 
      <param name="OdomF2M/MaxSize"        type="string" value="1000"/>
    </node>
  </group>
jgoppert commented 7 years ago

My fault on the twist velocity interpretation; I just found this: http://wiki.ros.org/navigation/Tutorials/RobotSetup/Odom, so body frame is the correct convention.

# This represents an estimate of a position and velocity in free space.
# The pose in this message should be specified in the coordinate frame given by header.frame_id.
# The twist in this message should be specified in the coordinate frame given by the child_frame_id.
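A minimal planar sketch of that convention (assuming only yaw matters): the twist must be rotated from the body (child) frame into the world frame before integrating the position:

```python
import math

def twist_body_to_world(vx, vy, yaw):
    # nav_msgs/Odometry gives the twist in the child (body) frame:
    # rotate by the current yaw to get world-frame velocity (planar case).
    c, s = math.cos(yaw), math.sin(yaw)
    return c * vx - s * vy, s * vx + c * vy

# Driving "forward" at 1 m/s while facing +y (yaw = 90 deg):
# all the motion is along world y, none along world x.
wx, wy = twist_body_to_world(1.0, 0.0, math.pi / 2)
```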
matlabbe commented 7 years ago

Hi,

See http://docs.ros.org/api/nav_msgs/html/msg/Odometry.html for the ROS standard. The twist is in the robot frame (x forward, y left, z up). EDIT: I didn't see the previous post while I was answering; so yes, as you pointed out too.

You may not want to set publish_tf=true if you do fusion, as we can assume that it is the node doing the fusion that will publish /odom -> /fcu TF. odom_frame_id would be the frame of your node doing odometry fusion (which would be /odom). Well, what are the basic frames published by mavros? ($ rosrun tf view_frames) What is the meaning of /map (if already published by mavros)?

rtabmap would publish /map -> /odom and mavros would publish /odom -> /fcu. Unless mavros does relocalization, normally there should be no /map from mavros.

If you want to initialize rgbd_odometry pose, don't use ground_truth_frame_id. It is there for a very specific use for rgbdslam_datasets.launch. Use /rtabmap/reset_odom_to_pose service instead.

Note that if you want a pure frame-to-frame odometry (maybe more suitable for sensor fusion), set Odom/Strategy=1 (parameter Odom/VisKeyFrameThr determines when a key frame is generated). To actually use the guess from TF (the output of ekf), set Odom/GuessMotion=true. For more info about these parameters: $ rosrun rtabmap_ros rgbd_odometry | grep Odom

You may also use only the velocity from the visual odometry messages (ignoring the pose) in the EKF. Just verify that values are not null (if odometry was lost on the last frame or was just initialized) before adding them to the EKF. That may resolve the problem of poses diverging.
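A sketch of that guard (function and flag names hypothetical). Note that a genuinely zero twist from a stationary vehicle would also be skipped by the null check, a limitation of this simple test:

```python
def usable_velocity(twist_linear, twist_angular, odometry_lost):
    # Forward only the velocity part of a visual-odometry message to the
    # EKF, skipping updates when odometry was lost or the twist is null
    # (e.g. right after (re)initialization).
    if odometry_lost:
        return None
    if all(abs(v) < 1e-12 for v in list(twist_linear) + list(twist_angular)):
        return None  # null twist: skip this EKF update
    return twist_linear, twist_angular

skipped = usable_velocity([0.0, 0.0, 0.0], [0.0, 0.0, 0.0], False)  # None
meas = usable_velocity([0.5, 0.0, 0.0], [0.0, 0.0, 0.1], False)     # valid
```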

cheers, Mathieu

jgoppert commented 7 years ago

Ok, thanks to your help it is now working. We have loop closure issues that we could resolve with vertigo, but we can't use it in this project due to its GPL license. Any tips to prevent bad loop closures without vertigo? I've been tinkering with params, but haven't had much luck.

I also exported the pose graph; I just see the vertex and edge lists, with no clear absolute constraint on any of the vertices, but I'm looking more into the docs.

[image]

[image]

VERTEX_SE3:QUAT 174 1.894403 1.314536 2.917456 -0.030801 -0.002804 0.742362 0.669284
VERTEX_SE3:QUAT 175 1.884756 1.345980 3.762947 -0.023771 -0.004275 0.742373 0.669552
EDGE_SE3:QUAT 1 26 -0.018654 -0.083029 0.990827 0.001753 -0.006879 -0.000753 0.999975 1.000000 0.000000 0.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 1.000000 0.000000 1.000000
EDGE_SE3:QUAT 26 27 -0.048811 -0.026146 0.564595 -0.000990 0.002182 -0.000070 0.999997 1.000000 0.000000 0.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 1.000000 0.000000 1.000000
matlabbe commented 7 years ago

Hi,

The best way to reject bad loop closures is to lower "RGBD/OptimizeMaxError" (default 1m) to something like 0.1 or 0.05 if your odometry doesn't drift too much.

cheers, Mathieu

jgoppert commented 7 years ago

@matlabbe I got absolute position and attitude measurements working with g2o/g2o_viewer, see below. Also, I noticed that on export the information matrices are all identity in rtabmap. I think if we have the covariance of the pose from an estimator like PX4, we might as well set the information matrix for the EDGE_SE3_PRIOR. This will also help with bad loop closures, since radically changing the vehicle attitude will have a higher cost associated with it than just translation. Setting it to 1 is effectively an angular standard deviation of 57 degrees, which is huge.
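The 57-degree figure follows from the information value being an inverse variance (in rad^-2 for the rotation terms); a quick sanity check with a hypothetical helper:

```python
import math

# Information (inverse variance) of 1 on a rotation term means a standard
# deviation of 1/sqrt(1) = 1 rad ~ 57.3 degrees; an information value of 10
# already shrinks that to ~18 degrees.
def angular_stddev_deg(information):
    return math.degrees(1.0 / math.sqrt(information))

s1 = angular_stddev_deg(1.0)
s10 = angular_stddev_deg(10.0)
```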

Also for RGBD-Odometry it seems like we should have a default information matrix that is not identity to improve performance.

So if you agree with these changes, let me know how best to proceed with including the absolute position and attitude measurements. My first thought is to take in an odometry measurement from an EKF like the PX4's and use it to set the priors for each vertex. Would this best fit into the SensorData structure or somewhere else?

PARAMS_SE3OFFSET 1 0 0 0 0 0 0 1 

# initial vertices
VERTEX_SE3:QUAT 1 -1 -1 0 0 0 0 1
VERTEX_SE3:QUAT 2 1 -1 0 0 0 0 1
VERTEX_SE3:QUAT 3 1 1 0 0 0 0 1
VERTEX_SE3:QUAT 4 -1 1 0 0 0 0 1

# odometry
EDGE_SE3:QUAT 1 2  2  0 0 0 0 0 1 1 0 0 0 0 0 1 0 0 0 0 1 0 0 0 10 0 0 10 0 10
EDGE_SE3:QUAT 2 3  0  2 0 0 0 0 1 1 0 0 0 0 0 1 0 0 0 0 1 0 0 0 10 0 0 10 0 10
EDGE_SE3:QUAT 3 4 -2  0 0 0 0 0 1 1 0 0 0 0 0 1 0 0 0 0 1 0 0 0 10 0 0 10 0 10
EDGE_SE3:QUAT 4 1  0 -2 0 0 0 0 1 1 0 0 0 0 0 1 0 0 0 0 1 0 0 0 10 0 0 10 0 10

# bad loop closure
EDGE_SE3:QUAT 1 2  0 0 0 0 0 0 1 1 0 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 1 0 1

# prior
EDGE_SE3_PRIOR 1 1 -1 -1 0 0 0 0 1 10 0 0 0 0 0 10 0 0 0 0 10 0 0 0 10 0 0 10 0 10
EDGE_SE3_PRIOR 2 1 1 -1 0 0 0 0 1 10 0 0 0 0 0 10 0 0 0 0 10 0 0 0 10 0 0 10 0 10
EDGE_SE3_PRIOR 3 1 1 1 0 0 0 0 1 10 0 0 0 0 0 10 0 0 0 0 10 0 0 0 10 0 0 10 0 10
EDGE_SE3_PRIOR 4 1 -1 1 0 0 0 0 1 10 0 0 0 0 0 10 0 0 0 0 10 0 0 0 10 0 0 10 0 10

Odometry

[image]

Assuming perfect odometry in this case so it matches ground truth. This is so I can highlight effects of the proposed changes on loop closures.

Bad Loop Closure

I connect the bottom vertices of the square to simulate a bad loop closure.

Current Approach (no priors/ identity information matrix)

[image]

Increase Attitude Information for Odometry

[image]

Including Priors (absolute position/ attitude measurement)

[image]

matlabbe commented 7 years ago

Hi,

rgbd_odometry outputs odometry with covariance values. The covariance is equal to the variance of the matching features. Some other visual odometry packages (like viso2) set it to a fixed value. You may do the same in the node receiving visual odometry, or subscribe/republish with different values. Loop closure links have their covariance set the same way as visual odometry.

Note that if odometry is always consistent, setting "RGBD/OptimizeMaxError" low would reject all bad loop closures. However, if odometry is not consistent (some unknown events that would cause large drifts like wheels slipping for a ground robot), then "RGBD/OptimizeMaxError" set low would reject also good loop closures.

If the attitude is very precise, the covariance of the odometry fusion's output for angular values could be set very low, so that g2o corrects the position more than the orientation in the graph on loop closures.

If useful, I could add new parameters to set fixed covariances of odometry and loop closure detection.

cheers, Mathieu

jgoppert commented 7 years ago

Ok, when I exported poses to g2o why were all of the covariance matrices the identity matrix?

Also, what about using edge priors in g2o? That is my main goal for this issue: to be able to pin down graph nodes when I get an absolute measurement.

jgoppert commented 7 years ago

Regarding the covariance being identity: I bet it is set to that in mavros, since that data wasn't typically being sent yet, so I will fix that.

jgoppert commented 7 years ago

Summary of my current approach

  1. I have known measurements of absolute position (x, y, z) from landmarks (like GPS), and absolute attitude measurements from the PX4 (magnetometer/ accelerometer).
  2. PX4 already fuses the landmark data, attitude measurements, and odometry data using an EKF on the PX4 and produces an estimate with a full state covariance matrix. When I don't see a global landmark for a while, the state covariance increases as you would expect. This should allow graph-slam to move these vertices more when solving the optimization problem.
  3. I'm thinking that I should have no relative motion constraints (EDGE_SE3:QUAT) and convert them all into pose priors (EDGE_SE3_PRIOR). I will set the information matrix for EDGE_SE3_PRIOR to the inverse of the PX4 state covariance matrix passed via the odometry message from mavros.

Questions:

  1. How do I best integrate this type of navigation scheme (no relative motion constraints, all prior constraints) with rtabmap?
  2. Do you see any downside to this approach?

Thanks for all of your help on this, James

matlabbe commented 7 years ago

I reverified, rgbd_odometry should set covariance values as described in previous post:

# from hand-held tutorial
$ roslaunch freenect_launch freenect.launch depth_registration:=true
$ roslaunch rtabmap_ros rtabmap.launch rtabmap_args:="--delete_db_on_start"

$ rostopic echo /rtabmap/odom
header: 
  seq: 606
  stamp: 
    secs: 1494965146
    nsecs: 113498192
  frame_id: odom
child_frame_id: camera_link
pose: 
  pose: 
    position: 
      x: 0.327120184898
      y: -0.287680566311
      z: 0.384312927723
    orientation: 
      x: 0.124392555296
      y: -0.219980743163
      z: -0.0724419433602
      w: 0.964824946603
  covariance: [0.00011628900392679498, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00011628900392679498, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00011628900392679498, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.9462406271486543e-05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.9462406271486543e-05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.9462406271486543e-05]
twist: 
  twist: 
    linear: 
      x: -0.686170637608
      y: 0.237332239747
      z: 0.392648518085
    angular: 
      x: 0.0189439728856
      y: 0.121301807463
      z: -0.066315844655
  covariance: [5.814450196339749e-05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 5.814450196339749e-05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 5.814450196339749e-05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 9.731203135743272e-06, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 9.731203135743272e-06, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 9.731203135743272e-06]

When exporting in g2o format:

VERTEX_SE3:QUAT 1 -0.000264 -0.020821 -0.005732 0.001622 -0.001851 0.006395 0.999977
VERTEX_SE3:QUAT 10 -0.048077 -0.172588 0.031531 0.073826 0.022054 0.126914 0.988917
VERTEX_SE3:QUAT 11 0.028653 -0.455130 0.031399 0.074044 0.009611 0.154616 0.985149
VERTEX_SE3:QUAT 12 0.219104 -0.149578 0.089803 0.086390 0.011465 0.025154 0.995878
VERTEX_SE3:QUAT 14 0.283908 -0.236820 0.243522 0.139159 -0.089186 -0.046785 0.985135
VERTEX_SE3:QUAT 15 0.301203 -0.247866 0.310505 0.145883 -0.172697 -0.071636 0.971474
VERTEX_SE3:QUAT 16 0.302610 -0.341667 0.393317 0.154707 -0.214412 -0.003808 0.964406
VERTEX_SE3:QUAT 18 0.219465 -0.330406 0.271169 0.149435 -0.289259 -0.023809 0.945215
VERTEX_SE3:QUAT 20 0.302071 -0.289403 0.291125 0.121067 -0.220865 -0.060891 0.965844
VERTEX_SE3:QUAT 21 0.003798 -0.199685 0.086414 0.019755 0.078489 0.101919 0.991495
VERTEX_SE3:QUAT 22 -0.006870 -0.015882 -0.014250 -0.002355 -0.001721 0.001351 0.999995
EDGE_SE3:QUAT 1 10 -0.049526 -0.151019 0.038107 0.072622 0.023675 0.120406 0.989782 38938.720125 0.000000 0.000000 0.000000 0.000000 0.000000 38938.720125 0.000000 0.000000 0.000000 0.000000 38938.720125 0.000000 0.000000 0.000000 28494.569368 0.000000 0.000000 28494.569368 0.000000 28494.569368
EDGE_SE3:QUAT 1 22 -0.006585 0.004995 -0.008532 -0.003987 0.000142 -0.005036 0.999979 298987.654762 0.000000 0.000000 0.000000 0.000000 0.000000 298987.654762 0.000000 0.000000 0.000000 0.000000 298987.654762 0.000000 0.000000 0.000000 352516.717604 0.000000 0.000000 352516.717604 0.000000 352516.717604
EDGE_SE3:QUAT 10 11 0.002344 -0.289392 0.044328 -0.001697 -0.010205 0.028797 0.999532 30970.937140 0.000000 0.000000 0.000000 0.000000 0.000000 30970.937140 0.000000 0.000000 0.000000 0.000000 30970.937140 0.000000 0.000000 0.000000 47332.634212 0.000000 0.000000 47332.634212 0.000000 47332.634212
EDGE_SE3:QUAT 11 12 0.275060 0.238539 0.022052 0.012899 -0.009772 -0.129217 0.991484 26875.504880 0.000000 0.000000 0.000000 0.000000 0.000000 26875.504880 0.000000 0.000000 0.000000 0.000000 26875.504880 0.000000 0.000000 0.000000 36659.464476 0.000000 0.000000 36659.464476 0.000000 36659.464476
EDGE_SE3:QUAT 12 14 0.057336 -0.062403 0.168142 0.051773 -0.107654 -0.062072 0.990897 9133.985602 0.000000 0.000000 0.000000 0.000000 0.000000 9133.985602 0.000000 0.000000 0.000000 0.000000 9133.985602 0.000000 0.000000 0.000000 10099.608253 0.000000 0.000000 10099.608253 0.000000 10099.608253
EDGE_SE3:QUAT 14 15 0.038839 0.014620 0.076045 0.010310 -0.085839 -0.014450 0.996151 1406.786362 0.000000 0.000000 0.000000 0.000000 0.000000 1406.786362 0.000000 0.000000 0.000000 0.000000 1406.786362 0.000000 0.000000 0.000000 2160.100118 0.000000 0.000000 2160.100118 0.000000 2160.100118
EDGE_SE3:QUAT 15 16 0.069745 -0.051817 0.120451 0.025752 -0.019475 0.064140 0.997418 298.726677 0.000000 0.000000 0.000000 0.000000 0.000000 298.726677 0.000000 0.000000 0.000000 0.000000 298.726677 0.000000 0.000000 0.000000 885.806193 0.000000 0.000000 885.806193 0.000000 885.806193
EDGE_SE3:QUAT 16 18 -0.096977 -0.010341 -0.050004 -0.005804 -0.076984 -0.007970 0.996984 707.330078 0.000000 0.000000 0.000000 0.000000 0.000000 707.330078 0.000000 0.000000 0.000000 0.000000 707.330078 0.000000 0.000000 0.000000 1684.409892 0.000000 0.000000 1684.409892 0.000000 1684.409892
EDGE_SE3:QUAT 18 20 0.087575 0.045764 -0.033765 -0.042145 0.065795 -0.037247 0.996247 782.835967 0.000000 0.000000 0.000000 0.000000 0.000000 782.835967 0.000000 0.000000 0.000000 0.000000 782.835967 0.000000 0.000000 0.000000 3135.765953 0.000000 0.000000 3135.765953 0.000000 3135.765953
EDGE_SE3:QUAT 20 21 -0.303888 0.038375 -0.013373 -0.081818 0.312355 0.143715 0.935460 640.498578 0.000000 0.000000 0.000000 0.000000 0.000000 640.498578 0.000000 0.000000 0.000000 0.000000 640.498578 0.000000 0.000000 0.000000 902.064703 0.000000 0.000000 902.064703 0.000000 902.064703
EDGE_SE3:QUAT 21 22 0.043457 0.176353 -0.102650 -0.022371 -0.079929 -0.100730 0.991446 3910.994642 0.000000 0.000000 0.000000 0.000000 0.000000 3910.994642 0.000000 0.000000 0.000000 0.000000 3910.994642 0.000000 0.000000 0.000000 4040.025525 0.000000 0.000000 4040.025525 0.000000 4040.025525

If rtabmap is using TF for odometry (odom_frame_id is set in rtabmap node), by default Identity is set for covariance. When TF is used to get odometry, covariance can be fixed with odom_tf_angular_variance and odom_tf_linear_variance parameters. Example using rtabmap.launch (this only works with latest rtabmap built from source):

$ roslaunch rtabmap_ros rtabmap.launch rtabmap_args:="--delete_db_on_start" odom_frame_id:=odom odom_tf_angular_variance:=0.01 odom_tf_linear_variance:=0.01

In your case, make rtabmap subscribe to odometry message instead to get covariance set by your ekf node.


rtabmap uses the covariance set in the twist, not the pose (whose covariance would grow without bound), to set the covariance of the links in the graph. See here for info. Note that I am currently updating the rtabmap library to save the full covariance/information matrix instead of only the max linear/angular values (originally TORO was used for graph optimization, which used only the max linear and angular values).
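A 1-D illustration of why the pose covariance is unsuitable for graph links while the twist covariance is: chaining independent relative motions makes the absolute pose variance grow without bound, while each link keeps its own small, constant variance:

```python
# Chaining N independent relative motions, each with variance step_var,
# gives an absolute pose variance of ~N * step_var (variances of
# independent increments add). So the pose covariance in an odometry
# message grows forever, while each link's twist-derived variance stays
# constant at step_var.
step_var = 0.01
pose_var = 0.0
for _ in range(1000):
    pose_var += step_var  # absolute pose uncertainty keeps accumulating

# pose_var is now ~10.0, vs. 0.01 for any single relative link
```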

Good questions, but I'm not comfortable enough with g2o to really give good advice on how to do that. What I had in mind was to assume all poses in the graph are relative from the start (output of the EKF), with loop closures re-correcting these poses in a SLAM fashion, as GPS or known landmarks may not always be available. The absolute poses (GPS, landmark) would be integrated along the relative poses, adding constraints so that the graph is optimized toward them. These constraints would also help to reject wrong loop closures (with RGBD/OptimizeMaxError), as the graph error would be large.

cheers, Mathieu

jgoppert commented 7 years ago

Thanks for checking the covariance.

The absolute poses (GPS, landmark) would be integrated along the relative poses, adding constraints so that the graph is optimized toward them.

I feel comfortable adding this now. Is a new SensorData type how you would envision adding it? On the g2o side it would just be adding an EDGE_PRIOR to the relative node.

matlabbe commented 7 years ago

The ground truth field could be used to set absolute poses. Is there any standard GPS message? If GPS messages are not always available, we could use TF to get an absolute pose (/world -> /gps) if the transform exists, then set it to SensorData's groundTruth pose before it is processed by rtabmap in the CoreWrapper node. We could follow the instructions from this post above to get the groundTruth pose into g2o.

I could add "gpsPose" + "gpsCovariance" fields to SensorData/Database if this could work well (to keep groundTruth field available for a real ground truth).

cheers, Mathieu

jgoppert commented 7 years ago

That sounds like a good plan. But it would be good to keep it more general than gps specific. I'm actually using this for known visual markers, not gps at the moment.

matlabbe commented 7 years ago

If there is only one marker seen at a time (like a single GPS value), we could use single fields "absolutePose" + "absolutePoseCovariance" (or "geoPose" + "geoCovariance"... any preference?). I'm thinking that this pose could be a geometry_msgs::PoseWithCovariance msg. As this absolute pose may not always be available, we cannot add it to the synchronized input topics of rtabmap. We should use an unsynchronized callback, similar to user_data_async used for userData (see this tutorial for example). The topic could be called absolute_pose (or geo_pose), and the pose would be added to the latest node added to the graph. The topic should be published slower than the rtabmap update rate (default 1 Hz), otherwise only the latest pose will be kept for a given node.

cheers, Mathieu

jgoppert commented 7 years ago

The visual markers are actually uniformly distributed in the environment and unique. I see a new one about every 3 seconds, and each is visible for about 500 ms. With GPS, the update rate is 5 Hz.

jgoppert commented 7 years ago

I think a pose with covariance would be good.

matlabbe commented 7 years ago

So, for a single node added to the graph, would you want to reference multiple markers + GPS (one or many GPS poses)? I'm not sure how to do this cleanly. In the setup I described, there would be only a single pose added to the referred node in the graph. That pose could already be a fusion of the markers' poses and the GPS pose. The idea is to limit the number of constraints added to the graph, because adding a lot would reduce the time performance of g2o, so at most one absolute constraint per node could be enough.

jgoppert commented 7 years ago

I think at roughly 1 Hz we should add a relative link (binary edge) between consecutive poses (like you are already doing) and also add an absolute link (unary edge prior) to the graph. If the EKF doesn't know anything about position, the position covariance in the nav_msgs::Odometry msg will be very high and the unary edge won't greatly impact the graph; the relative links will dominate. If the EKF position covariance goes low because it is getting GPS or vision landmarks, the absolute links will dominate. This is also backwards compatible with your existing setup. We can have a flag that enables addition of the unary edges.
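A 1-D sketch of why that weighting works (toy numbers): the optimum of the quadratic graph cost is the information-weighted mean, so a low-information (high-covariance) prior barely moves the estimate, while a high-information one dominates:

```python
def fuse(z_rel, info_rel, z_abs, info_abs):
    # Minimum of info_rel*(x - z_rel)**2 + info_abs*(x - z_abs)**2:
    # the information-weighted mean of the two "measurements".
    return (info_rel * z_rel + info_abs * z_abs) / (info_rel + info_abs)

# EKF has no position fix: prior information ~0, relative estimate dominates.
weak = fuse(1.0, 100.0, 5.0, 1e-3)    # stays close to 1.0
# EKF just saw a landmark/GPS: high-information prior pulls the estimate.
strong = fuse(1.0, 100.0, 5.0, 1e4)   # close to 5.0
```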