Closed: jorgemiar closed this issue 3 years ago
Hi @jorgemia
An advanced way of using the ZED2 with robot_localization is to perform a two-stage pose estimation.
The first stage estimates the odometry with odom_frame as world_frame, fusing ONLY continuous position data (ZED odometry, wheel odometry, IMU data, etc.).
The second stage estimates the final pose with map_frame as world_frame, using non-continuous position data such as ZED pose, RTAB-Map pose, GPS, etc.
The odometry estimated in the first stage can be used as a source for the second stage, and the other position sources can be used as a "fix" for the position.
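The two stages described above can be sketched as two robot_localization instances; this is only an illustration of the idea, with placeholder topic names (real configs need the full parameter set, as shown later in this thread):

```yaml
# Stage 1 (local EKF): publishes odom -> base_link,
# fusing only continuous sources.
# world_frame: odom
# odom0: zed2/zed_node/odom          # ZED visual odometry (continuous)
# odom1: wheel_odometry/odom         # wheel odometry (continuous, placeholder topic)
# imu0:  zed2/zed_node/imu/data

# Stage 2 (global EKF): publishes map -> odom,
# adding the non-continuous "fix" sources on top of stage 1.
# world_frame: map
# odom0: odometry/filtered/local     # output of the stage-1 EKF
# pose0: zed2/zed_node/pose_with_covariance  # ZED pose (may jump on loop closure)
# (plus GPS via navsat_transform_node, RTAB-Map pose, etc.)
```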
@Myzhar Sounds like a good approach!
Can the ZED2 odometry be fused then even without Twist messages?
And is it worth it fusing the magnetometer data using imu filter madgwick for example? (The robot is being used outdoors in mostly flat terrain)
Twist information is a plus; if it is not available, the Kalman filter will estimate it. Fusing the magnetometer data is a good feature to add in order to get an absolute orientation, but remember to perform hard-iron and soft-iron calibration with the camera installed on your robot to reduce the magnetic field noise. Each robot has its own magnetic fields to be compensated (iron case, motor magnets, power cables, etc.), which is why we cannot provide a factory magnetometer calibration like we do for accelerometer and gyroscope biases. There are many nodes to perform magnetometer calibration in ROS (look at what Clearpath Robotics did, for example); you can choose the one that best fits your requirements.
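For intuition, hard-iron and soft-iron compensation boil down to subtracting a constant offset and applying a correction matrix to each raw magnetometer sample. A minimal sketch in Python, where the offset and matrix values are made-up examples (in practice a calibration node estimates them for your specific robot):

```python
import numpy as np

# Hypothetical calibration results (example values only; a calibration
# routine would estimate these for your robot):
# hard-iron offset: constant bias from the robot's own magnetic sources
hard_iron_offset = np.array([12.5, -3.2, 7.8])
# soft-iron matrix: corrects axis scaling/skew from nearby ferrous material
soft_iron_matrix = np.array([[0.98, 0.02, 0.00],
                             [0.02, 1.03, -0.01],
                             [0.00, -0.01, 0.99]])

def correct_mag(raw):
    """Apply hard- and soft-iron compensation to one raw magnetometer sample."""
    return soft_iron_matrix @ (np.asarray(raw, dtype=float) - hard_iron_offset)

corrected = correct_mag([15.0, -1.0, 9.0])
```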
Ok, thanks for the great support as always!
@Myzhar reopening because I had some related questions.
I'm a bit confused about the tf frames when using the r_l package. The ZED launch files and URDF allow you to transform the ZED data into the base_link frame. As discussed in #447, robot_localization is also going to produce this transform, right?
From the r_l docs:
nav_msgs/Odometry - All pose data (position and orientation) is transformed from the message header’s frame_id into the coordinate frame specified by the world_frame parameter (typically map or odom). In the message itself, this specifically refers to everything contained within the pose property. All twist data (linear and angular velocity) is transformed from the child_frame_id of the message into the coordinate frame specified by the base_link_frame parameter (typically base_link).
So for the odom data:
header:
  seq: 3191
  stamp:
    secs: 1596714390
    nsecs: 244690047
  frame_id: "odom"
child_frame_id: "base_link"
pose:
  pose:
    position:
      x: -4.40967255923
      y: -5.80727780841
      z: 3.24955665729
    orientation:
      x: 0.0794586434868
      y: 0.993327849592
      z: -0.0236118676296
      w: -0.0801784691055
  covariance: [1.2646474090161064e-07, -1.4660645319963805e-07, -1.2479400623988113e-08, 5.393557600541499e-09, -4.266585307277637e-09, 6.537544550155872e-08, -1.4660619740425318e-07, 3.0835786901661777e-07, 5.185518148209667e-08, -4.6811283738179554e-09, 2.6498213401282555e-08, -1.2536132487639406e-07, -1.247940684123705e-08, 5.18552596417976e-08, 9.39644593245248e-08, 1.166117336026673e-08, 5.250183932048458e-08, -2.266321885713296e-08, 5.393556712363079e-09, -4.681134591066893e-09, 1.1661189347478285e-08, 1.8647678956540403e-08, 1.2724227005378452e-08, 7.630074905229378e-10, -4.266553332854528e-09, 2.6498181426859446e-08, 5.2501846425911936e-08, 1.2724216347237416e-08, 4.04720630342581e-08, -1.2433320151217231e-08, 6.537537444728514e-08, -1.253613675089582e-07, -2.2663179777282494e-08, 7.6301209794849e-10, -1.2433329033001428e-08, 5.6301487916243786e-08]
twist:
  twist:
    linear:
      x: 0.0
      y: 0.0
      z: 0.0
    angular:
      x: 0.0
      y: 0.0
      z: 0.0
  covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
---
r_l would transform the pose data from odom into the odom frame (i.e. no transform needed), and if there were twist data it would transform it from base_link into base_link. So technically I already have the data in the right frames, and robot_localization shouldn't need to transform anything? Will this therefore cause a clash? What would be the right way to proceed?
Also, no transform is applied by the ZED ROS wrapper to the IMU, right? Should this be defined in the URDF as a static transform between base_link and "zed2_imu_link"?
Hi @jorgemia
Only one node must publish the odom frame, so you must make sure that publish_tf in the ZED ROS Wrapper is set to false if you are using an external node to fuse all the odometries.
On the other hand, the robot_localization node does not produce the transform from zed_camera_center to base_link; it's the ZED node that generates it internally.
I think you got a bit confused, but don't worry, that's normal when facing a robot TF tree for the first time.
Documents that help a lot in understanding how things work are REP105 and REP103.
The ZED ROS Wrapper is designed to respect the conventions described in both documents.
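To make the division of labour concrete, a REP105-style TF tree for this setup would look roughly like the sketch below (frame names assume the default ZED wrapper naming; who publishes each transform is noted underneath):

```
map -> odom -> base_link -> zed2_camera_center -> zed2_left_camera_frame -> zed2_imu_link
 |      |          |                    |                        |
 |      |          +-- URDF / robot_state_publisher --+----------+
 |      +-- local EKF (robot_localization, world_frame: odom)
 +-- global EKF (robot_localization, world_frame: map)
```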
@Myzhar yes I think I'm slightly confused with frames, topics, transforms...😅
So I just have to disable publish_tf in the zed wrapper, define the transform between the zed camera and base_link in the zed node/my URDF, and feed the visual odometry topic to the robot_localization node, with the config looking something like below?
odom_frame: odom
base_link_frame: base_link
world_frame: odom
two_d_mode: false
frequency: 50
odom0: navvy_velocity_controller/odom
odom0_config: [false, false, false,
false, false, false,
true, true, true,
false, false, true,
false, false, false]
odom1: zed2/zed_node/odom
odom1_config: [true, true, true,
true, true, true,
false, false, false,
false, false, false,
false, false, false]
You mention:
On the other side the robot_localization node does not produce the transform from zed_camera_center to base_link, it's the ZED node that generates it internally.
What does this mean then (from here)?
The state estimation nodes of robot_localization produce a state estimate whose pose is given in the map or odom frame and whose velocity is given in the base_link frame. All incoming data is transformed into one of these coordinate frames before being fused with the state.
And on a similar note, if I were to add the zed imu to r_l, I would also have to define a transform between the zed imu and base link in my urdf?
(Sorry about all these r_l related questions)
To answer myself:
So I just have to disable publish_tf in the zed wrapper, define the transform between the zed camera and base_link in the zed node/my URDF and feed the visual odometry topic to the robot localization node, with the config looking something like below?
Yes. Define static transforms in the URDF (the relationship between the ZED camera centre and base_link). Disable publish_tf for the ZED camera, as this TF will be published by robot_localization after fusing the different sensors. Subscribe to the ZED odom topic in robot_localization, and robot_localization will publish the odom -> base_link transform. (Not sure about this, but I think the ZED odom topic publishes pose info in the odom frame and twist info in the child frame, e.g. base_link, although the ZED odom topic does not currently contain twist info. If your odometry message is not already in these frames, robot_localization will transform the data into them in order to fuse it with the other sensors' data.)
And on a similar note, if I were to add the zed imu to r_l, I would also have to define a transform between the zed imu and base link in my URDF?
Yes, you can add it to the URDF w.r.t. the left camera frame and disable publish_imu_tf in the zed2.yaml config file.
<joint name="${camera_name}_imu_link_joint" type="fixed">
  <parent link="${camera_name}_left_camera_frame"/>
  <child link="${camera_name}_imu_link"/>
  <origin xyz="-0.002 -0.023 -0.002" rpy="-0.00125679082703 -0.00299259042367 -0.00718557741493" />
</joint>
<link name="${camera_name}_imu_link" />
@Myzhar Let me know if anything I said is incorrect.
I am reopening because I have an issue with a similar topic. My goal is to fuse the ZED (zed2) localization with a GPS sensor using the robot_localization EKF. Before adding the GPS, I wanted to check that I can reproduce the ZED's performance with the EKF. So I ran two EKF nodes: one that uses the ZED IMU + visual odometry, and one that uses the ZED IMU + visual odometry + ZED pose with covariance. The result was that base_link quickly started to drift off into space.
My launch file is:
<launch>
  <node pkg="robot_localization" type="ekf_localization_node" name="ekf_se_local" clear_params="true">
    <rosparam command="load" file="$(find robot_localization)/params/local_ekf_template.yaml" />
    <remap from="odometry/filtered" to="odometry/filtered/local"/>
  </node>
  <node pkg="robot_localization" type="ekf_localization_node" name="ekf_se_global" clear_params="true">
    <rosparam command="load" file="$(find robot_localization)/params/global_ekf_template.yaml" />
    <remap from="odometry/filtered" to="odometry/filtered/global"/>
  </node>
</launch>
and the parameter files:
#local_ekf_template.yaml
frequency: 30
silent_tf_failure: false
sensor_timeout: 0.1
two_d_mode: false
transform_time_offset: 0.0
transform_timeout: 0.0
print_diagnostics: true
debug: false
debug_out_file: /path/to/debug/file.txt
publish_tf: true
publish_acceleration: false
permit_corrected_publication: false
map_frame: map # Defaults to "map" if unspecified
odom_frame: odom # Defaults to "odom" if unspecified
base_link_frame: base_link # Defaults to "base_link" if unspecified
world_frame: odom # Defaults to the value of odom_frame if unspecified
odom0: zed2/zed_node/odom
odom0_config: [true, true, true,
false, false, false,
false, false, false,
false, false, false,
false, false, false]
odom0_queue_size: 2
odom0_nodelay: false
odom0_differential: false
odom0_relative: false
#odom0_pose_rejection_threshold: 5
#odom0_twist_rejection_threshold: 1
imu0: zed2/zed_node/imu/data
imu0_config: [false, false, false,
true, true, true,
false, false, false,
true, true, true,
true, true, true]
imu0_nodelay: false
imu0_differential: false
imu0_relative: true
imu0_queue_size: 5
#imu0_pose_rejection_threshold: 0.8 # Note the difference in parameter names
#imu0_twist_rejection_threshold: 0.8 #
#imu0_linear_acceleration_rejection_threshold: 0.8 #
imu0_remove_gravitational_acceleration: true
use_control: true
stamped_control: false
control_timeout: 0.2
control_config: [true, false, false, false, false, true]
acceleration_limits: [1.3, 0.0, 0.0, 0.0, 0.0, 3.4]
deceleration_limits: [1.3, 0.0, 0.0, 0.0, 0.0, 4.5]
acceleration_gains: [0.8, 0.0, 0.0, 0.0, 0.0, 0.9]
deceleration_gains: [1.0, 0.0, 0.0, 0.0, 0.0, 1.0]
process_noise_covariance: [0.05, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0.05, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0.06, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0.03, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0.03, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0.06, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0.025, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0.025, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0.04, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.02, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.015]
initial_estimate_covariance: [1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9]
#global_ekf_template.yaml
frequency: 30
silent_tf_failure: false
sensor_timeout: 0.1
two_d_mode: false
transform_time_offset: 0.0
transform_timeout: 0.0
print_diagnostics: true
debug: false
debug_out_file: /path/to/debug/file.txt
publish_tf: true
publish_acceleration: false
permit_corrected_publication: false
map_frame: map # Defaults to "map" if unspecified
odom_frame: odom # Defaults to "odom" if unspecified
base_link_frame: base_link # Defaults to "base_link" if unspecified
world_frame: map # Defaults to the value of odom_frame if unspecified
odom0: zed2/zed_node/odom
odom0_config: [true, true, true,
false, false, false,
false, false, false,
false, false, false,
false, false, false]
odom0_queue_size: 2
odom0_nodelay: false
odom0_differential: false
odom0_relative: false
#odom0_pose_rejection_threshold: 5
#odom0_twist_rejection_threshold: 1
pose0: zed2/zed_node/pose_with_covariance
pose0_config: [true, true, true,
false, false, false,
false, false, false,
false, false, false,
false, false, false]
pose0_differential: true
pose0_relative: false
pose0_queue_size: 5
#pose0_rejection_threshold: 2 # Note the difference in parameter name
pose0_nodelay: false
imu0: zed2/zed_node/imu/data
imu0_config: [false, false, false,
true, true, true,
false, false, false,
true, true, true,
true, true, true]
imu0_nodelay: false
imu0_differential: false
imu0_relative: true
imu0_queue_size: 5
#imu0_pose_rejection_threshold: 0.8 # Note the difference in parameter names
#imu0_twist_rejection_threshold: 0.8 #
#imu0_linear_acceleration_rejection_threshold: 0.8 #
imu0_remove_gravitational_acceleration: true
use_control: true
stamped_control: false
control_timeout: 0.2
control_config: [true, false, false, false, false, true]
acceleration_limits: [1.3, 0.0, 0.0, 0.0, 0.0, 3.4]
deceleration_limits: [1.3, 0.0, 0.0, 0.0, 0.0, 4.5]
acceleration_gains: [0.8, 0.0, 0.0, 0.0, 0.0, 0.9]
deceleration_gains: [1.0, 0.0, 0.0, 0.0, 0.0, 1.0]
process_noise_covariance: [0.05, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0.05, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0.06, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0.03, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0.03, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0.06, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0.025, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0.025, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0.04, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.02, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.015]
initial_estimate_covariance: [1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9]
Can anyone see the issue?
Similar to #549 I'd like to fuse the ZED2 odometry, the ZED2 IMU data and the wheel odometry to hopefully estimate the robot's position better in an outdoor environment using robot_localization. As mentioned, the ZED2 odometry topic doesn't publish Twist values yet.
Sorry if this is more of a robot_localization question than a ZED related one, but what would be the best way of using the data available from the ZED camera (eg. Visual-Inertial Odom, IMU etc...) to improve robot_localization?
Side note, the robot_localization package states:
So maybe another option could be to use the pose topic from the camera as an absolute position source?
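If the ZED pose topic were used as an absolute position source, the corresponding robot_localization entry might look like the sketch below (frame and topic names as used earlier in this thread; note this differs from the earlier global config, which set pose0_differential to true):

```yaml
# In the map-frame (global) EKF: treat the ZED pose as an absolute position fix
pose0: zed2/zed_node/pose_with_covariance
pose0_config: [true,  true,  true,    # x, y, z position
               false, false, false,
               false, false, false,
               false, false, false,
               false, false, false]
pose0_differential: false   # absolute pose, not integrated deltas
pose0_relative: false
pose0_queue_size: 5
```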