cra-ros-pkg / robot_localization

robot_localization is a package of nonlinear state estimation nodes. The package was developed by Charles River Analytics, Inc. Please ask questions on answers.ros.org.
http://www.cra.com

ekf_localization_node frame clarification #165

Closed jim-rothrock closed 9 years ago

jim-rothrock commented 9 years ago

I'm writing visual odometry and solar compass nodes that provide input to ekf_localization_node, and I'd like to get some clarification regarding frames. My understanding of ekf_localization_node's use of frames is described below. Please let me know if there are any errors.

nav_msgs/Odometry odom_msg.pose specifies the transform from __ to __. odom_msg.twist specifies angular and linear velocities in __.

Does ekf_localization_node find and apply the transform from odom_msg.child_frame_id to the frame specified by the base_link_frame parameter, or does ekf_localization_node act as if odom_msg.child_frame_id is the same as base_link_frame, even when it isn't? My visual odometry node sets odom_msg.header.frame_id to "odom" and odom_msg.child_frame_id to "top_mast_camera_optical", which is translated and rotated relative to "base_link".
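For concreteness, here is roughly how my node fills in those fields (using minimal stand-in classes rather than the real nav_msgs/Odometry, so the sketch runs without a ROS installation; the field names match the actual message definition):

```python
# Hypothetical stand-ins for std_msgs/Header and nav_msgs/Odometry.
class Header:
    def __init__(self):
        self.frame_id = ""

class Odometry:
    def __init__(self):
        self.header = Header()
        self.child_frame_id = ""

odom_msg = Odometry()
odom_msg.header.frame_id = "odom"                    # frame the pose is reported in
odom_msg.child_frame_id = "top_mast_camera_optical"  # frame the twist is reported in
```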

sensor_msgs/Imu After reading https://github.com/cra-ros-pkg/robot_localization/issues/22, this is my understanding. imu_msg.orientation specifies the orientation transform from the ENU frame to __. imu_msg.angular_velocity specifies angular velocities in __. imu_msg.linear_acceleration specifies linear accelerations in __.

geometry_msgs/PoseWithCovarianceStamped pose_msg.pose specifies the transform from __ to the frame specified by ekf_localization_node's base_link_frame parameter.

I was using this message type to publish the orientation transform from the ENU frame (e.g. "map", assuming that the map frame and the ENU frame are identical) to the solar compass camera's optical frame, but discovered that this won't work with ekf_localization_node unless pose_msg.pose includes the transform from the optical frame to "base_link". If ekf_localization_node handles nav_msgs/Odometry and sensor_msgs/Imu messages in the way that I think it does, I can use one of those instead of geometry_msgs/PoseWithCovarianceStamped.

geometry_msgs/TwistWithCovarianceStamped twist_msg.twist specifies angular and linear velocities in __.

ayrton04 commented 9 years ago

Let me back up and go over the relevant coordinate frames used by ekf_localization_node, and then address each sensor in turn.

I base pretty much everything on REP-105. Specifically, I work on the assumption of three principal coordinate frames:

(a) base_link - The coordinate frame that is rigidly attached to the robot (the "body frame" of the robot).
(b) odom - A world-fixed frame whose origin is the robot's start position and orientation. The robot's position in this frame will be continuous, but may drift over time.
(c) map - A world-fixed frame whose origin is the robot's start position and orientation. It is exactly like the odom frame, except that it is assumed that data within the map frame will not drift.

Now, the purpose of ekf_localization_node is to estimate the robot's pose in one of the world-fixed frames, along with its velocity in the base_link frame. It also produces the transform from the world-fixed frame to the base_link frame. This transform is identical to the pose estimate in the world-fixed frame. Three things:

  1. The names of all three frames are configurable for ekf_localization_node. You can use the base_link_frame, odom_frame, and map_frame parameters to set the values for the frames in (a), (b), and (c) above, respectively. They default to "base_link", "odom", and "map", respectively.
  2. The world_frame parameter specifies the coordinate frame in which you want to estimate your position, i.e., whether you want to compute a map->base_link transform or an odom->base_link transform. Most people use odom unless they are fusing absolute pose data, e.g., GPS. The value of the world_frame parameter must match either the odom_frame or the map_frame parameter.
  3. When specifying these parameters, keep in mind that all twist (velocity) data (linear and rotational velocity, as well as linear acceleration) will be transformed into the frame specified by the base_link_frame parameter, and THEN fused with the filter. Likewise, with the exception of IMU data (more on that in a second), all pose data (position and orientation) will be transformed into the frame specified by the world_frame parameter, and then fused with the filter.
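As a sketch, a frame configuration for ekf_localization_node might look like the following (the frame names here are illustrative defaults, not required values):

```yaml
# Illustrative ekf_localization_node frame parameters
map_frame: map              # (c) world-fixed, drift-free frame
odom_frame: odom            # (b) world-fixed, continuous frame
base_link_frame: base_link  # (a) body frame of the robot
world_frame: odom           # must match odom_frame or map_frame
```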

Now, to answer your questions about each sensor input type:

nav_msgs/Odometry All odometry messages contain a pose object and a twist (velocity) object. The pose data is reported in the coordinate frame given by the frame_id of the message, and the twist data is reported in the coordinate frame of the message's child_frame_id. Internally, the message gets broken up and treated as one PoseWithCovarianceStamped message and one TwistWithCovarianceStamped message. Those messages are then passed to the same methods we use for standard PoseWithCovarianceStamped and TwistWithCovarianceStamped messages.
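That internal split can be sketched like this (stand-in objects rather than real ROS messages, so the snippet runs without ROS; the frame_id/child_frame_id semantics follow the actual message definition):

```python
# Hypothetical lightweight stand-in for the ROS message types.
class Msg:
    pass

def split_odometry(odom):
    """Split an Odometry-like message into its pose and twist halves,
    mirroring how ekf_localization_node treats nav_msgs/Odometry."""
    pose_msg = Msg()
    pose_msg.frame_id = odom.frame_id          # pose lives in header.frame_id
    pose_msg.pose = odom.pose

    twist_msg = Msg()
    twist_msg.frame_id = odom.child_frame_id   # twist lives in child_frame_id
    twist_msg.twist = odom.twist
    return pose_msg, twist_msg

odom = Msg()
odom.frame_id, odom.child_frame_id = "odom", "base_link"
odom.pose, odom.twist = "pose-data", "twist-data"
p, t = split_odometry(odom)
```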

geometry_msgs/PoseWithCovarianceStamped All data in this message will be transformed into the frame specified by the world_frame parameter, and then fused with the filter.

geometry_msgs/TwistWithCovarianceStamped All data in this message will be transformed into the frame specified by the base_link_frame parameter, and then fused with the filter.

sensor_msgs/Imu This one is messy, and a lot of it has to do with the way the IMU message was originally defined. A re-work of the message is in progress at the ROS level, but in the meantime, I had to make some assumptions about the data, some of which may make things difficult. All data in the IMU message is specified in the IMU's frame_id. This is an issue, as the IMU will most likely report orientation in a world-fixed frame (specified by the gravity vector and the vector pointing to magnetic north), whereas the angular velocity and linear acceleration data are reported in the sensor's body frame. Unfortunately, we only have one frame_id in the IMU message, so we have to make an assumption.

For angular velocity and linear acceleration data, things work just fine: we transform all the data into the frame specified by the base_link_frame parameter and fuse it with the filter.

Orientation data gets more complicated. For ekf_localization_node, all orientation data is transformed from the frame_id of the IMU message into the frame specified by the base_link_frame parameter, but it is then fused as if it were in the world-fixed frame. I make the assumption that all orientation data is in an ENU frame, not NED, so you'll have to correct NED data before it comes in. Problematically, handling the transformation this way does not correctly handle IMUs that are mounted in a non-neutral orientation, e.g., upside-down or rotated 90 degrees. I need to address this, but it will be easier to do once the new IMU specification is in place. See this discussion for more information.
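For the NED-to-ENU correction on heading data, the yaw component can be converted as below. This is only a sketch for yaw, based on the standard relationship between the two conventions; a full 3D correction requires a quaternion rotation, and none of this is code from the package itself:

```python
import math

def ned_yaw_to_enu(yaw_ned):
    """Convert a heading in NED convention (clockwise from north) to
    ENU convention (counterclockwise from east), normalized to (-pi, pi]."""
    yaw_enu = math.pi / 2.0 - yaw_ned
    # Normalize into (-pi, pi].
    while yaw_enu > math.pi:
        yaw_enu -= 2.0 * math.pi
    while yaw_enu <= -math.pi:
        yaw_enu += 2.0 * math.pi
    return yaw_enu

# Facing north (NED yaw = 0) is +pi/2 in ENU; facing east (NED yaw = pi/2) is 0.
```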

Note that for all message types, if your data is in some frame other than the frames specified by your world_frame and base_link_frame parameters, you'll need a transform to the frames in question if you want to fuse that data.

jim-rothrock commented 9 years ago

Thanks for the detailed reply. I'll modify the visual odometry and solar compass nodes to publish transforms from world_frame to base_link_frame.

  3. When specifying these parameters, keep in mind that all twist (velocity) data...

Point 3 and the description of sensor_msgs/Imu handling merit inclusion on the robot_localization wiki page, or their own "robot_localization and frames" page. They provide all the information needed by someone who is writing code that is intended to work with robot_localization.

ayrton04 commented 9 years ago

Yes, I agree about the wiki. The entire thing needs some work. Just need to find the time!

One quick note, though: ekf_localization_node is responsible for publishing the transform from the world_frame to the base_link_frame, as that transform equates to the robot's position in the world_frame. If another node publishes the same transform, you'll quickly run into trouble for any node that consumes that transform. What you'll want to make sure of is that you have transforms that allow your sensor data to be transformed from whatever frame it's reported in to the world_frame or base_link_frame.
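The "only one publisher per transform" rule can be illustrated with a toy check over (publisher, parent frame, child frame) entries. This is purely conceptual, not how tf actually detects conflicts, and the node names are made up:

```python
from collections import defaultdict

def conflicting_transforms(published):
    """Given (publisher_node, parent_frame, child_frame) tuples, return the
    transforms that more than one node is publishing."""
    publishers = defaultdict(set)
    for node, parent, child in published:
        publishers[(parent, child)].add(node)
    return {edge: nodes for edge, nodes in publishers.items() if len(nodes) > 1}

published = [
    ("ekf_localization_node", "odom", "base_link"),
    ("wheel_odometry_node",   "odom", "base_link"),  # conflict: same transform twice
    ("robot_state_publisher", "base_link", "camera"),
]
# odom -> base_link has two publishers, which will confuse any tf consumer.
```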

positron96 commented 6 years ago

Sorry to bring this issue up but I hope this is a right place to ask.

I am trying to use ekf_localization_node to create an odom -> base_footprint tf transform. For this, I need the other nodes (which capture raw odometry) not to publish this tf transform. But I need various topics of types Odometry, Pose, etc. from these nodes, and the data in these topics must have a frame associated with it. The nodes providing these topics (Gazebo right now, but I'll try to use TurtleBot encoders, the TurtleBot gyroscope, and visual odometry) set frame_id to odom there. Is this a correct setup? Do I need to set some other frame there? What are the best practices for this? Won't these nodes mess up their data when the odom transform is published by ekf_localization_node rather than by themselves?

Right now, my filtered odometry jumps around like crazy, while odometry from gazebo on which it is based is close to perfect.

ayrton04 commented 6 years ago

Thanks for the question. However, I would ask that you re-post the question on answers.ros.org. When you do, please be sure to include your full launch file/configuration, and a sample input message from each sensor input.

positron96 commented 6 years ago

Thank you! I will move to answers.ros.org then.

thangvip4321 commented 4 years ago

@positron96 did you have your answer? Can you put the link to your question here? I'm also having the same problem.

positron96 commented 4 years ago

That would be this: https://answers.ros.org/question/283090/robot_localization-odom-topics-frame_ids-and-tfs/

But frankly, I don't remember whether I solved my problem or not.