locusrobotics / fuse

The fuse stack provides a general architecture for performing sensor fusion live on a robot. Some possible applications include state estimation, localization, mapping, and calibration.

Does Fuse support fusing GNSS and IMU data? #200

Open gustavovelascoh opened 3 years ago

gustavovelascoh commented 3 years ago

Hi, just wondering if fusing GNSS and IMU data is currently supported or has been done using fuse.

Thanks.

svwilliams commented 3 years ago

There is an IMU sensor model available here. It is treated basically the same way as in robot_localization.

In a very basic way, GPS data could be treated as a direct position/pose measurement in the map frame using a "pose" sensor.

So, in some very basic way, it is possible to fuse IMU and GNSS data right now.
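As a rough sketch of what that could look like in a config file: fuse_models does include a pose sensor, but the exact type and parameter names below (and the assumption that GNSS fixes have already been converted to `geometry_msgs/PoseWithCovarianceStamped` messages in the map frame, e.g. via a UTM conversion) should be verified against the source before use:

```
# Hypothetical sketch: treat converted GNSS fixes as direct pose measurements.
sensor_models:
  gnss_pose_sensor:
    type: fuse_models::Pose2D          # assumed type name; check fuse_models
    motion_models: [unicycle_motion_model]

gnss_pose_sensor:
  topic: gnss_pose                     # assumed topic carrying PoseWithCovarianceStamped
  position_dimensions: ['x', 'y']      # assumed parameter names
```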

But I'll happily take submissions for a sensor that does a better job modelling GPS/GNSS measurements, or for better IMU models. Or you can easily create your own fuse plugin repo and share sensor models with the community that way. If you are interested in creating your own sensor models and need any help getting started, just let me know. I'll happily walk you through the creation of a new sensor type.

gustavovelascoh commented 3 years ago

Thanks for your quick reply. I will have a deeper look at the package and give it a try. Is there any guide/tutorial available to follow?

svwilliams commented 3 years ago

Not really. Which is frustrating, I know. I will eventually get some tutorials put together. But until then, I'll share an informal conversation I've had with another user:

fuse is "just" a wrapper around the Google Ceres Solver optimization engine. The sensor plugins generate measurement constraints/cost functions/factors/whatever from sensor data. Those measurement constraints are then sent to an "optimizer", which incorporates all of the constraints from all of the sensors and solves for the best estimate of all states/variables. So an "optimizer" class is the thing that launches all of the sensor plugins.

fuse ships with two different variants: a batch optimizer and a fixed-lag optimizer. A batch optimizer keeps all of the constraints and variables forever, which is useful for mapping/SLAM-type applications. The fixed-lag smoother maintains a fixed-lag time window: when a variable is older than the lag size, it is automatically removed (marginalized) from the system. This is useful for localization-type applications and is similar in nature to robot_localization's EKF.
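The fixed-lag idea can be illustrated with a toy sketch (this is not fuse's actual implementation, just the windowing concept):

```python
# Conceptual sketch of fixed-lag marginalization: variables whose timestamp
# falls outside (newest stamp - lag) are dropped from the active window.
def fixed_lag_window(variable_stamps, lag):
    """Return only the stamps that remain inside the fixed-lag window."""
    newest = max(variable_stamps)
    return [t for t in variable_stamps if t >= newest - lag]

stamps = [0.0, 0.5, 1.0, 1.5, 2.0]
fixed_lag_window(stamps, lag=1.0)  # keeps [1.0, 1.5, 2.0]
```

A batch optimizer, by contrast, would simply keep every stamp forever.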

In addition to Sensors and Optimizers, fuse also defines Motion Models and Publishers.

A motion model is very similar to a sensor, only it generates constraints in response to a request, rather than in response to a received sensor measurement. This is used by fuse to connect the various sensor measurements together. In your proposed example with IMU and odometry sensors, the IMU and odometry messages probably publish at different frequencies and at different times. So the IMU may publish at t=0.1s, t=0.2s, t=0.3s, t=0.4s, etc., while the odometry sensor publishes at t=0.15s, t=0.30s, t=0.45s, etc. The optimizer uses the configured motion models to generate constraints between each received sensor measurement. So in the above case, we end up with motion model constraints from t=0.0s -> t=0.1s, t=0.1s -> t=0.15s, t=0.15s -> t=0.2s, t=0.2s -> t=0.3s, t=0.3s -> t=0.4s, t=0.4s -> t=0.45s, etc.
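The interleaving above can be sketched in a few lines (again, a conceptual toy, not fuse's actual code): merge the measurement timestamps from all sensors, then link each consecutive pair with a motion model constraint.

```python
# Conceptual sketch: chain motion model constraints between the union of all
# sensor measurement timestamps, starting from an initial time.
def motion_model_spans(start, *sensor_stamps):
    """Return (from, to) pairs linking every distinct measurement time."""
    stamps = sorted({start, *(t for stamps in sensor_stamps for t in stamps)})
    return list(zip(stamps, stamps[1:]))

imu_stamps = [0.1, 0.2, 0.3, 0.4]
odom_stamps = [0.15, 0.30, 0.45]
motion_model_spans(0.0, imu_stamps, odom_stamps)
# -> [(0.0, 0.1), (0.1, 0.15), (0.15, 0.2), (0.2, 0.3), (0.3, 0.4), (0.4, 0.45)]
```

Note that t=0.3s appears in both streams but produces only one state, hence the set-union before sorting.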

And publisher models receive the updated state variables after each optimization cycle and publish something useful out to ROS. This could be the latest state, or a trajectory of the last N states, or even a map (hypothetically... I don't have a publisher for maps yet).

And now to the configuration. A basic system will launch the fixed-lag smoother node:

<node name="smoother" pkg="fuse_optimizers" type="fixed_lag_smoother_node" output="screen">
  <rosparam command="load" file="$(find my_package)/config/smoother.yaml" subst_value="true"/>
</node>

And then the config file must be generated:

# Fixed-lag smoother configuration
optimization_frequency: 10.0  # The best-effort frequency (in Hz) of the optimization loop. This means that the optimizer will run every 0.1s and trigger all of the publishers when it is done
lag_duration: 1.0  # The size of the fixed-lag window, in seconds. The larger the window, the more variables will be inside the optimizer at any given time, which affects CPU usage.

# Define the set of motion model plugins to load
# Currently fuse only includes a 2D motion model. This is roughly equivalent to the robot_localization state, including the linear and angular position, the linear and angular velocity, and the linear acceleration. A turtlebot is a 2D robot, so this is fine.
# Multiple motion models can be loaded, and each sensor defines which set of motion models are used. We only have one motion model currently, and all sensors should use it.
motion_models:
  unicycle_motion_model:
    type: fuse_models::Unicycle2D

# Next define the set of sensor plugins to load
# You naturally want an IMU sensor and an Odometry sensor
# Additionally, you may want an "ignition" sensor. Ignition sensors tell the optimizer when to start running and what the initial state should be. fuse ships with one ignition sensor that initializes the Unicycle2D state variables.
sensor_models:
  ignition_sensor:
    type: fuse_models::Unicycle2DIgnition
    motion_models: [unicycle_motion_model]
    ignition: true
  odometry_sensor:
    type: fuse_models::Odometry2D
    motion_models: [unicycle_motion_model]
  imu_sensor:
    type: fuse_models::Imu2D
    motion_models: [unicycle_motion_model]

# And finally, define the set of publishers. fuse_models includes a robot_localization like publisher that publishes the latest state to the odom and tf topics.
publishers:
  odom_publisher:
    type: fuse_models::Odometry2DPublisher

# That is it for the optimizer itself, but each plugin has its own set of configuration. That configuration is specified in a sub-namespace of the plugin's name. E.g. ignition_sensor, odometry_sensor, imu_sensor
# These parameters are not strictly backwards-compatible with robot_localization configuration, but basically all of the same functionality exists.
unicycle_motion_model:
  buffer_length: 1.0  # How long to keep a local history. This should be at least as long as the smoother lag
  process_noise_diagonal: [0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01]  # These are in order {x, y, yaw, vx, vy, vyaw, ax, ay}. And note these are variances. The numerical values should be adjusted to the platform.

ignition_sensor:
  publish_on_startup: true  # Just start immediately. If false, it waits for a service to send it initial state values
  initial_state: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]  # The same order as the process noise above
  initial_sigma: [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]   # The same order as the process noise above. Note these are standard deviations. The numerical values should be adjusted to the platform.

odometry_sensor:
  topic: odom
  twist_target_frame: base_link
  linear_velocity_dimensions: ['x', 'y']
  angular_velocity_dimensions: ['yaw']
  # The above configuration only uses the odometry velocity information. There are several options for using the odometry pose information as well.
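  # Hedged example: to also fuse the odometry pose, one could add something
  # like the following (parameter names assumed from fuse_models::Odometry2D;
  # verify against the source before use):
  #   pose_target_frame: odom
  #   position_dimensions: ['x', 'y']
  #   orientation_dimensions: ['yaw']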

imu_sensor:
  topic: imu
  angular_velocity_dimensions: ['yaw']
  twist_target_frame: base_link
  # The above configuration only uses the yaw velocity of the IMU measurement. Depending on the IMU data available, additional measurement dimensions can be added
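  # Hedged example: if the IMU also provides a usable orientation and linear
  # acceleration, one could add something like the following (parameter names
  # assumed from fuse_models::Imu2D; verify against the source before use):
  #   orientation_dimensions: ['yaw']
  #   linear_acceleration_dimensions: ['x', 'y']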

odom_publisher:
  topic: odom_smoothed
  base_link_frame_id: base_link
  odom_frame_id: odom
  map_frame_id: map
  world_frame_id: odom
  publish_tf: true
  # The odometry publisher also has several configuration modes, again very similar to the robot_localization package. The exact settings will depend on whether you are trying to fuse multiple dead reckoning sensors to make a better "odom" source, or if you are fusing in global state data and want to publish the map->odom transform.
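  # Hedged example: when fusing global data (e.g. GNSS) and publishing the
  # map->odom transform, one would typically set (mirroring robot_localization
  # conventions; verify against the fuse_models source):
  #   world_frame_id: map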
gustavovelascoh commented 3 years ago

Thanks @svwilliams, I think that will be helpful as a starting point.

saimouli commented 3 years ago

@svwilliams would the robot_model package support fusing of VO (x, y, yaw) and encoder values (x, y, yaw)? Thanks