JorgeFernandes-Git opened this issue 1 year ago (status: Open)
World link: https://github.com/JorgeFernandes-Git/zau_bot/blob/9610f0113a8d5f74e49c7f279b86044592a83c79/e3_rgb2agv/e3_rgb2agv_calibration/calibration/config.yml#L40
Pattern parent link (fixed): https://github.com/JorgeFernandes-Git/zau_bot/blob/9610f0113a8d5f74e49c7f279b86044592a83c79/e3_rgb2agv/e3_rgb2agv_calibration/calibration/config.yml#L94
rosrun atom_calibration configure_calibration_pkg -n e3_rgb2agv_calibration -utf
Hi @miguelriemoliveira and @manuelgitgomes.
Camera to AGV calibration went very well. I organized the main information and made some videos for you to check out. :+1:
Hello @JorgeFernandes-Git !
Congrats!
However, the last collections had worse results. Can you explain this? Maybe a worse view of the pattern?
I noticed the same; however, the collections weren't that bad:
Collection 25:
Collection 26:
Collection 27:
Too close, perhaps? Or maybe the wheels weren't completely static when I collected the data. Should I try to collect new data and rerun the calibration? Tomorrow I'll run an evaluation similar to the last one.
Btw @manuelgitgomes, did you manage to use the node interactive_pattern with this type of calibration? I know it is not needed, because the pattern wasn't supposed to move, but when I launched it I got an error in RViz saying:
Cannot get tf info for init message with sequence number 1. Error: "world" passed to lookupTransform argument source_frame does not exist.
I tried to change the world frame to the odom frame in the script, but it didn't work. It's not a problem at all, but it made me think. Long story short, I had to position the pattern in Gazebo. :smile:
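Not part of the original discussion, but one way to check why lookupTransform complains is to inspect the TF tree while the simulation is running. The commands below are standard ROS tools; the frame names are only examples:

```bash
# Dump the current TF tree to a PDF to confirm whether a "world" frame is
# actually being published (command name depends on the ROS distro):
rosrun tf2_tools view_frames.py     # Noetic; on older distros: rosrun tf view_frames

# Or try to resolve the transform directly; this fails with a similar error
# if the "world" frame does not exist (frame names here are examples):
rosrun tf tf_echo world camera_mb_link
```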
Hi @JorgeFernandes-Git ,
very nice work. Congrats. I am reading from top to bottom and have some comments:
I really enjoy the detailed description of what this experiment aims to achieve. Of course, that opens up the flank for a bunch of questions :-). Here they go:
Q0
As I said, I really enjoy the description of each experiment. There is one thing missing so that the experiment can be replayed by others: links to the bagfiles or at least to the dataset files.
Q1
About the selection of the transformation to be estimated: I would select another transformation, the camera_mb_link to the camera_mb_rgb_frame.
The reason is that the one you selected works fine, but this way you won't be able to also calibrate the depth camera if needed, because the transformation you selected affects both the RGB and the depth camera.
Another question would be why choose camera_mb_link to the camera_mb_rgb_frame and not the camera_mb_rgb_frame to camera_mb_rgb_optical_frame. That answer is [here](https://github.com/lardemua/atom/issues/498#issuecomment-1194820154).
If it is possible to repeat the tests with this new configuration, great, but I would say it's not mandatory...
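For reference, a minimal sketch of how the Q1 suggestion might look in the sensors section of the linked config.yml, assuming ATOM's usual sensor fields (link, parent_link, child_link, topic_name); the sensor name and topic are placeholders, so check the actual file:

```yaml
# Hypothetical sketch (not copied from the repository) of the Q1 suggestion.
# Field names follow ATOM's usual sensor format; sensor name and topic are placeholders.
sensors:
  camera_mb_rgb:
    link: camera_mb_rgb_optical_frame     # frame in which the data is published
    parent_link: camera_mb_link           # parent of the transformation to be estimated
    child_link: camera_mb_rgb_frame       # child of the transformation to be estimated
    topic_name: /camera_mb/rgb/image_raw  # placeholder topic name
```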
Q2
About some collections being clearly worse than others: there are two solutions.
The long road approach is to use the dataset playback functionality to verify that all the labels are correct. My guess is that some of them are incorrect. Try to see how the labels look.
The shortcut approach is to filter out the collections that have too much error in a new calibration run. You can use the --collection_selection_function argument for that.
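For reference, the long-road approach usually follows ATOM's two-terminal pattern sketched below; the launch file name is assumed from ATOM's standard generated package layout, the dataset path is a placeholder, and the exact arguments should be confirmed with `--help`:

```bash
# Terminal 1: bring up RViz configured for playback
# (launch file assumed to be generated by configure_calibration_pkg)
roslaunch e3_rgb2agv_calibration dataset_playback.launch

# Terminal 2: step through the collections and inspect the labels
# (dataset path is a placeholder; check the script's --help for extra options)
rosrun atom_calibration dataset_playback \
    -json ~/datasets/e3_rgb2agv/dataset.json
```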
Q4
Too close, perhaps? Or maybe the wheels weren't completely static when I collected the data.
Are you considering "What is a good bagfile for recording an ATOM dataset?", which you can read here?
Q5
Btw @manuelgitgomes, did you manage to use the node interactive_pattern with this type of calibration? I know it is not needed, because the pattern wasn't supposed to move, but when I launched it I got an error in RViz saying:
I suggest this goes into a new issue in the atom repository ... maybe helpful for others.
Hi @miguelriemoliveira,
Q0
As I said, I really enjoy the description of each experiment. There is one thing missing so that the experiment can be replayed by others: links to the bagfiles or at least to the dataset files.
I will add the bags. Not really sure how, though. Any suggestions?
Q1
About the selection of the transformation to be estimated: I would select another transformation, the camera_mb_link to the camera_mb_rgb_frame.
My bad. We'd already talked about this topic, and I made the same mistake again. I'll keep this in mind.
Q2
About some collections being clearly worse than others: there are two solutions. The long road approach is to use the dataset playback functionality to verify that all the labels are correct. My guess is that some of them are incorrect. Try to see how the labels look. The shortcut approach is to filter out the collections that have too much error in a new calibration run. You can use the --collection_selection_function argument for that.
I'll see what I can do about this to get better results. I'll record a new bag now that I've seen the worst collections.
Q0
As I said, I really enjoy the description of each experiment. There is one thing missing so that the experiment can be replayed by others: links to the bagfiles or at least to the dataset files.
I will add the bags. Not really sure how, though. Any suggestions?
Put them all in Google Drive or OneDrive and add a link to them. More important than the bagfiles are the datasets.
Q2
About some collections being clearly worse than others: there are two solutions. The long road approach is to use the dataset playback functionality to verify that all the labels are correct. My guess is that some of them are incorrect. Try to see how the labels look. The shortcut approach is to filter out the collections that have too much error in a new calibration run. You can use the --collection_selection_function argument for that.
I'll see what I can do about this to get better results. I'll record a new bag now that I've seen the worst collections.
When you see a bad collection, the first thing to try is the short road I told you about, i.e., omitting that collection (or those collections) from the calibration.
The second thing is to create a new dataset (not a bag file).
Creating a new bagfile is only needed if you think the bag you have has some problem...
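As a concrete, hypothetical sketch of that shortcut, excluding the three collections flagged above from a new calibration run; the dataset path is a placeholder and the flag syntax should be confirmed against the calibrate script's `--help`:

```bash
# Terminal 1: visualization (launch file assumed from ATOM's generated package)
roslaunch e3_rgb2agv_calibration calibrate.launch

# Terminal 2: run the optimization, skipping collections 25, 26 and 27
# (the lambda receives the collection name as a string; path is a placeholder)
rosrun atom_calibration calibrate \
    -json ~/datasets/e3_rgb2agv/dataset.json \
    -csf "lambda name: int(name) not in [25, 26, 27]"
```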
Calibration of an RGB Astra camera mounted on the AGV (sensor-in-motion calibration). (Successful)
https://github.com/JorgeFernandes-Git/zau_bot/tree/main/e3_rgb2agv
Summary:
Launch optimized URDF:
Evaluate two datasets: ...
Calibration Results per collection:
Calibration tree and Transformation RGB to AGV:
Videos:
Recording bag file: https://youtu.be/q7h5tL1suVE
Playback dataset: https://youtu.be/sh7AZn0dUgA
Run calibration: https://youtu.be/tBw2jvTYlb4
Evaluation procedure:
Configure the calibration using the transformations in the bagfile instead of the ones produced by the xacro description:
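This presumably corresponds to the command shown at the top of the thread, where the -utf flag appears to request using the transformations from the bagfile instead of the xacro description:

```bash
# Same command as at the top of the thread; -utf presumably selects the
# transformations from the bagfile rather than those from the xacro description
rosrun atom_calibration configure_calibration_pkg -n e3_rgb2agv_calibration -utf
```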
Commands:
Launch AGV for calibration:
Launch teleop to control the AGV through the keyboard:
Record bag file (a generic sketch of the teleop and recording steps follows this list):
Playback dataset for calibration (2 terminals):
Run calibration (2 terminals):
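The exact commands for the teleop and bag-recording steps are not reproduced in this excerpt; the following is only a generic, hypothetical sketch using standard ROS tools, and the actual launch files and topics belong to the zau_bot packages:

```bash
# Hypothetical sketch only; the project may ship its own teleop launch and
# record a different set of topics.

# Drive the AGV from the keyboard (standard ROS teleop node)
rosrun teleop_twist_keyboard teleop_twist_keyboard.py

# Record the topics needed for calibration; topic names here are placeholders
rosbag record -O e3_rgb2agv.bag /tf /tf_static \
    /camera_mb/rgb/image_raw /camera_mb/rgb/camera_info
```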