The fuse stack provides a general architecture for performing sensor fusion live on a robot. Some possible applications include state estimation, localization, mapping, and calibration.
EMRS should be treated as a floating platform. In this sense, the 3D omnidirectional motion model (with constant angular rate and constant linear acceleration) is appropriate for propagating its state over time. In this case, the rover state corresponds to its 3D position, attitude, linear and angular velocities, and linear acceleration.
EMRS is also, ideally, a rigid body. The motion of a rigid body in 3D space is fully described by its instantaneous axis of rotation (IAOR) and the angular velocity vector directed along that axis.
We could merge two contributions to create a new motion model:
first, we can obtain the angular velocity vector from the sensor readings. These come from the transactions queue before being passed to the optimizer, and need to be connected by the motion model.
second, we could add the rover's kinematic configuration (steer/shoulder angles and velocities) to the robot base state, in order to estimate where the IAOR is at a specific time from geometric considerations.
Having these two, we can obtain the rover's linear velocity vector and propagate the base state using the omnidirectional 3D motion model.
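The rigid-body relation behind this merge is v = ω × r, where r points from the IAOR to the base origin; since the ICR position is expressed in base_link, r is just its negation. A minimal sketch (function names are hypothetical, not part of fuse):

```python
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def linear_velocity_from_icr(omega, icr_in_base):
    """Base linear velocity from rigid-body kinematics: v = w x r.

    omega:       angular velocity vector (in base_link)
    icr_in_base: position of the IAOR/ICR expressed in base_link,
                 so the vector from the IAOR to the base origin is -icr.
    """
    r = tuple(-c for c in icr_in_base)
    return cross(omega, r)
```

For example, rotating counterclockwise (ω = +z) about a point 1 m to the rover's right yields a velocity along -x, as expected for a turn in place about that point.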
An idea could be to augment the robot state with only the 2D position of the IAOR, since this is already available from the ICR computer. Specifically, we could:
create a new sensor model that subscribes to the ICR position publisher and creates a new position constraint relative to base_link (since the ICR computer publishes the ICR position in the base_link frame) --> this can be done using the same logic as the tf2 transform listener, by subscribing to the ICR position and adding its transform with respect to base_link to a tf2 buffer.
expand the omnidirectional 3D motion model to accept the ICR position. If we have a buffer this is quite simple: we just look up the transform we need at a specific timestamp (that of state1). Inside the predict function, it should compute the linear velocity and, using the previous attitude, propagate the state over the dt.
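A minimal planar sketch of such a predict step, assuming the ICR position has already been looked up from the buffer (the state layout and function name are hypothetical; the real fuse model works on the full 3D state):

```python
import math

def predict(state, omega_z, icr_xy, dt):
    """Propagate a planar base state over dt using the ICR position.

    state:   (x, y, yaw) of base_link in the odom frame
    omega_z: yaw rate (rad/s)
    icr_xy:  ICR position (x, y) expressed in base_link
    dt:      time step (s)
    """
    # Linear velocity in the base frame from v = w x r, with r = -icr
    # (planar specialization of the rigid-body relation).
    vx_b = omega_z * icr_xy[1]
    vy_b = -omega_z * icr_xy[0]
    # Rotate into the odom frame using the previous attitude, then integrate.
    c, s = math.cos(state[2]), math.sin(state[2])
    x = state[0] + (c * vx_b - s * vy_b) * dt
    y = state[1] + (s * vx_b + c * vy_b) * dt
    yaw = state[2] + omega_z * dt
    return (x, y, yaw)
```

Using the previous attitude for the rotation keeps the sketch a simple Euler step; the actual model could use a midpoint attitude instead.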
TODO:
1) scale process noise based on ICR info
in some sense, this is already possible if we set scale_process_noise to true, since this scales it using the updated state
if we want to account for how "good" the estimate of the ICR position is, then we have to devise something else
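One possible scheme for that "something else" is to inflate the process noise by the ratio between the reported ICR standard deviation and a nominal value. This is only an illustrative sketch; the function name, the scaling rule, and the nominal value are all assumptions, not existing fuse parameters:

```python
def scale_process_noise(Q, icr_std, nominal_std=0.05):
    """Inflate a process-noise matrix when the ICR estimate is poor.

    Q:           process-noise covariance as a list of rows
    icr_std:     standard deviation of the ICR position estimate (m)
    nominal_std: ICR std at which no inflation is applied (assumed value)
    """
    # Never shrink the noise below its configured value.
    factor = max(1.0, icr_std / nominal_std)
    return [[q * factor for q in row] for row in Q]
```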
2) throttle ICR caching based on variation of position
This is achieved by keeping the last ICR transform in memory: it is updated only if the newly received position differs by more than a tolerance. During lookup, we can end up in three situations:
ICR is not moving, so we can just read the last available transform
ICR is moving, so we ask for the transform at that specific timestamp, which can either be found exactly or be interpolated by tf
no ICR info is available, so we will proceed with the prediction step without adjusting the base state
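The throttled cache and the three lookup situations can be sketched as follows (a simplified stand-in for the tf2 buffer, with linear interpolation in place of tf's; the class name is hypothetical):

```python
import math

class IcrCache:
    """Caches 2D ICR positions, throttled by a position tolerance."""

    def __init__(self, tolerance=0.01):
        self.tolerance = tolerance
        self.history = []  # list of (stamp, (x, y)), stamps increasing

    def add(self, stamp, xy):
        # Throttle: cache only if the ICR moved more than the tolerance.
        if self.history:
            lx, ly = self.history[-1][1]
            if math.hypot(xy[0] - lx, xy[1] - ly) < self.tolerance:
                return
        self.history.append((stamp, xy))

    def lookup(self, stamp):
        if not self.history:
            return None  # situation 3: no ICR info, predict without it
        if stamp >= self.history[-1][0]:
            # situation 1: ICR not moving since the last sample,
            # just read the last available transform.
            return self.history[-1][1]
        # situation 2: ICR was moving, interpolate between the two
        # surrounding samples (tf would do this for us).
        for (t0, p0), (t1, p1) in zip(self.history, self.history[1:]):
            if t0 <= stamp <= t1:
                a = (stamp - t0) / (t1 - t0)
                return (p0[0] + a * (p1[0] - p0[0]),
                        p0[1] + a * (p1[1] - p0[1]))
        return self.history[0][1]  # older than the history: clamp
```

Note that the throttling happens on insertion, so the buffer only grows when the ICR actually moves, which keeps lookups cheap while the rover drives straight or turns about a fixed point.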