JorgeFernandes-Git / zau_bot


e3 - RGB camera to AGV calibration #6

Open JorgeFernandes-Git opened 1 year ago

JorgeFernandes-Git commented 1 year ago

Calibration of an Astra RGB camera mounted on the AGV (sensor-in-motion calibration). (Successful)

https://github.com/JorgeFernandes-Git/zau_bot/tree/main/e3_rgb2agv


Summary:

Launch optimized URDF:

roslaunch e3_rgb2agv_optimized e3_rgb2agv_optimized.launch

Evaluate two datasets: ...


Calibration Results per collection:

| Collection | Camera (px) |
|------------|-------------|
| 000 | 1.3215 |
| 001 | 0.5251 |
| 002 | 2.0025 |
| 003 | 1.4960 |
| 004 | 5.3229 |
| 005 | 1.0206 |
| 006 | 1.1486 |
| 007 | 1.9128 |
| 008 | 0.9098 |
| 009 | 0.7199 |
| 010 | 1.1669 |
| 011 | 0.5040 |
| 012 | 2.3567 |
| 013 | 0.3553 |
| 014 | 1.1806 |
| 015 | 1.5711 |
| 016 | 1.2563 |
| 017 | 0.9612 |
| 018 | 1.8731 |
| 019 | 1.0544 |
| 020 | 1.7724 |
| 021 | 2.5432 |
| 022 | 2.3654 |
| 023 | 2.2581 |
| 025 | 7.7681 |
| 026 | 15.2750 |
| 027 | 3.4709 |
| **Averages** | **2.3745** |
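The reported average can be reproduced from the per-collection values with a short script (numbers copied from the table above; the 5 px outlier threshold is an arbitrary choice for illustration, not part of the original evaluation):

```python
# Per-collection reprojection errors (px), copied from the table above.
# Note that collection 024 is absent from the results.
errors = {
    "000": 1.3215, "001": 0.5251, "002": 2.0025, "003": 1.4960,
    "004": 5.3229, "005": 1.0206, "006": 1.1486, "007": 1.9128,
    "008": 0.9098, "009": 0.7199, "010": 1.1669, "011": 0.5040,
    "012": 2.3567, "013": 0.3553, "014": 1.1806, "015": 1.5711,
    "016": 1.2563, "017": 0.9612, "018": 1.8731, "019": 1.0544,
    "020": 1.7724, "021": 2.5432, "022": 2.3654, "023": 2.2581,
    "025": 7.7681, "026": 15.2750, "027": 3.4709,
}

# Plain mean over the 27 collections; matches the 2.3745 px reported above.
average = sum(errors.values()) / len(errors)
print(f"average: {average:.4f} px")

# Flag collections whose error sits far above the rest (threshold arbitrary).
outliers = sorted(k for k, v in errors.items() if v > 5.0)
print("outliers:", outliers)
```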

Calibration tree and Transformation RGB to AGV:

(images: summary tree, e3_rgb2agv, e3_rgb2agv_2)


Videos:

Recording bag file: https://youtu.be/q7h5tL1suVE

Playback dataset: https://youtu.be/sh7AZn0dUgA

Run calibration: https://youtu.be/tBw2jvTYlb4

Evaluation procedure:


Configure the calibration using the transformations in the bagfile instead of the ones produced by the xacro description:

rosrun atom_calibration configure_calibration_pkg -n e3_rgb2agv_calibration -utf

Commands:

Launch AGV for calibration:

roslaunch zau_bot_bringup moving_base_odom.launch calibration_pattern:=true

Launch teleop to control the AGV through the keyboard:

roslaunch zau_bot_bringup my_teleop.launch 

Record bag file:

roslaunch e3_rgb2agv_record_bag record_sensor_data.launch

Playback dataset for calibration (2 terminals):

roslaunch e3_rgb2agv_calibration dataset_playback.launch

rosrun atom_calibration dataset_playback -json /home/jorge/datasets/e3_rgb2agv/dataset.json -ow

Run calibration (2 terminals):

roslaunch e3_rgb2agv_calibration calibrate.launch

rosrun atom_calibration calibrate -json $ATOM_DATASETS/e3_rgb2agv/dataset.json -v -rv -si -vo
JorgeFernandes-Git commented 1 year ago

Main considerations for this calibration:

Topics on the ROS bag (pattern is fixed, robot is moving):

https://github.com/JorgeFernandes-Git/zau_bot/blob/9610f0113a8d5f74e49c7f279b86044592a83c79/e3_rgb2agv/e3_rgb2agv_record_bag/launch/record_sensor_data.launch#L26-L34

Config yaml file:

World link: https://github.com/JorgeFernandes-Git/zau_bot/blob/9610f0113a8d5f74e49c7f279b86044592a83c79/e3_rgb2agv/e3_rgb2agv_calibration/calibration/config.yml#L40

Pattern parent link (fixed): https://github.com/JorgeFernandes-Git/zau_bot/blob/9610f0113a8d5f74e49c7f279b86044592a83c79/e3_rgb2agv/e3_rgb2agv_calibration/calibration/config.yml#L94
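For reference, the relevant part of the config might look roughly like the fragment below. This is a sketch from memory of ATOM's `config.yml` layout, not a copy of the linked file; the exact key names and link values should be checked against the file above:

```yaml
# Fixed frame used as the root of the calibration (see the "World link" line above).
world_link: world

calibration_pattern:
  # The pattern is fixed in the world, so its parent is a static frame
  # (see the "Pattern parent link" line above).
  parent_link: world
  fixed: true
```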

Configure the calibration using the transformations in the bagfile instead of the ones produced by the xacro description:

rosrun atom_calibration configure_calibration_pkg -n e3_rgb2agv_calibration -utf
JorgeFernandes-Git commented 1 year ago

Hi @miguelriemoliveira and @manuelgitgomes.

Camera to AGV calibration went very well. I organized the main information and made some videos for you to check out. :+1:

manuelgitgomes commented 1 year ago

Hello @JorgeFernandes-Git !

Congrats!

However, the last collections had worse results. Can you explain this? Maybe a worse view of the pattern?

JorgeFernandes-Git commented 1 year ago

I noticed the same; however, the collections weren't that bad:

Collection 25: camera_mb_025

Collection 26: camera_mb_026

Collection 27: camera_mb_027

Too close perhaps? Or maybe the wheels weren't completely static when I collected the data. Should I try collecting new data and rerunning the calibration? Tomorrow I'll run an evaluation similar to the last one.

JorgeFernandes-Git commented 1 year ago

Btw @manuelgitgomes, did you manage to use the node interactive_pattern with this type of calibration? I know it is not needed, because the pattern isn't supposed to move, but when I launched it I got an error in rviz saying:

Cannot get tf info for init message with sequence number 1. Error: "world" passed to lookupTransform argument source_frame does not exist.

I tried changing the world frame to the odom frame in the script, but it didn't work. It's not a problem at all, but it made me think. Long story short, I had to position the pattern in Gazebo. :smile:

miguelriemoliveira commented 1 year ago

Hi @JorgeFernandes-Git ,

very nice work. Congrats. I am reading from top to bottom and have some comments:

I really enjoyed the detailed description of what this experiment aims to achieve. Of course, that opens up the flank for a bunch of questions :-). Here they go:

Q0

As I said, I really enjoy the description of each experiment. There is one thing missing so that the experiment can be replayed by others: links to the bagfiles or at least to the dataset files.

Q1

About the selection of the transformation to be estimated: I would select another transformation, the camera_mb_link to the camera_mb_rgb_frame.

The reason for this is that the one you selected works fine, but this way you won't be able to also calibrate the depth camera if needed, because the transformation you selected affects both the RGB and the depth camera.

Another question would be why I choose camera_mb_link to camera_mb_rgb_frame and not camera_mb_rgb_frame to camera_mb_rgb_optical_frame. That answer is [here](https://github.com/lardemua/atom/issues/498#issuecomment-1194820154).

If it is possible to repeat the tests with this new configuration, great, but I would say it's not mandatory ...

Q2

About some collections being clearly worse than others: there are two solutions.

The long-road approach is to use the dataset playback functionality to verify whether all the labels are correct. My guess is that some of them are incorrect. Try to see how the labels look.

The shortcut approach is to filter out the collections that have too much error in a new calibration run. You can use the --collection_selection_function for that.
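The selection function is just a predicate over collection names. A minimal Python sketch of the filtering semantics, assuming the worst collections identified above (004, 025, 026) are the ones to drop (the exact `-csf` command-line syntax should be checked against the ATOM documentation):

```python
# All collection names present in the dataset (024 is missing, as in the
# results table above).
collections = [f"{i:03d}" for i in list(range(24)) + [25, 26, 27]]

# The same predicate could be passed to calibrate as a string, e.g.
#   -csf "lambda name: name not in ['004', '025', '026']"
# (hedged: check the exact flag syntax in the ATOM docs).
csf = lambda name: name not in ["004", "025", "026"]

selected = [c for c in collections if csf(c)]
print(f"{len(selected)} of {len(collections)} collections kept")
```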

Q4

Too close perhaps? Or maybe the wheels weren't completely static when I collected the data.

Are you considering this: "What is a good bagfile for recording an ATOM dataset?", which you can read here?

Q5

Btw @manuelgitgomes, did you manage to use the node interactive_pattern with this type of calibration? I know it is not needed, because the pattern wasn't supposed to move, but when I launched it I got an error on rviz saying:

I suggest this goes into a new issue in the atom repository ... maybe helpful for others.

JorgeFernandes-Git commented 1 year ago

Hi @miguelriemoliveira,

Q0

As I said, I really enjoy the description of each experiment. There is one thing missing so that the experiment can be replayed by others: links to the bagfiles or at least to the dataset files.

I will add the bags. Not really sure how, though. Any suggestions?

Q1

About the selection of the transformation to be estimated: I would select another transformation, the camera_mb_link to the camera_mb_rgb_frame.

My bad. We'd already talked about this topic, and I did it again. I'll keep this in mind.

Q2

About some collections being clearly worse than others. There are two solutions. The long-road approach is to use the dataset playback functionality to verify if all the labels are correct. My guess is that some of them should be incorrect. Try to see how the labels are. The shortcut approach is to filter out these collections that have too much error in a new calibration run. You can use the --collection_selection_function for that.

I'll see what I can do on this to get better results. I'll record a new bag now that I've seen which collections were worst.

miguelriemoliveira commented 1 year ago

Q0

As I said, I really enjoy the description of each experiment. There is one thing missing so that the experiment can be replayed by others: links to the bagfiles or at least to the dataset files.

I will add the bags. Not really sure how, though. Any suggestions?

Put them all in a google drive or one drive and add the link to them. More important than the bagfiles are the datasets.

Q2

About some collections being clearly worse than others. There are two solutions. The long-road approach is to use the dataset playback functionality to verify if all the labels are correct. My guess is that some of them should be incorrect. Try to see how the labels are. The shortcut approach is to filter out these collections that have too much error in a new calibration run. You can use the --collection_selection_function for that.

I'll see what I can do on this to get better results. I'll record a new bag now that I've seen which collections were worst.

When you see a bad collection, the first thing to try is the shortcut I told you about, i.e., omitting that collection (or those collections) from the calibration.

The second thing is to create a new dataset (not a bag file).

Creating a new bagfile is only needed if you think that the bag you have has some problem...