lardemua / atom

Calibration tools for multi-sensor, multi-modal robotic systems

Implementation of functionality to estimate N+1 transformations with N sensors #543

JorgeFernandes-Git closed this issue 1 year ago

JorgeFernandes-Git commented 1 year ago

Hi everyone.

How can we adapt ATOM to be capable of estimating the position of a link that doesn't belong directly to a sensor? What do I mean by that?

Watch this video first: https://www.youtube.com/watch?v=DbFU10SCOL8

Here, I was using the set initial estimate launch file generated by ATOM to estimate the position of the manipulator arm w.r.t. the AGV. The sensor used was a camera on the end-effector of the manipulator (eye-in-hand). Here is the graph tree (image: e1_arm2agv_graph). This didn't work: https://youtu.be/JhHzo6qumo0


Then I added a second sensor to the AGV, a camera, and performed the calibration of the two cameras. Here is the system (image: e4_DualRGB_1) and the calibration graph (image: e4_summary tree). This works just fine, as expected: https://youtu.be/AqQp4M0psj4?t=44


Now I want to be able to perform both of those calibrations simultaneously. That is, the aim is to estimate 3 transformations with just 2 camera sensors.


Btw the robot's name is Zau 😄

manuelgitgomes commented 1 year ago

Hello @JorgeFernandes-Git!

> I'll merge the scripts and maybe rename it to link_to_frame?

Sure, good idea!

> Do you have suggestions on how the output table should be? Should the script evaluate all collections?

The transformations do not move between collections, so evaluating once per dataset is enough.
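A minimal sketch of what that means in practice, assuming the usual ATOM dataset layout in which each collection carries a transforms dictionary keyed as 'parent-child' with trans and quat fields (the key and path below are illustrative, not taken from this issue):

```python
import json

with open('dataset.json') as f:  # illustrative path
    dataset = json.load(f)

# The transform is static across collections, so reading it from any
# single collection (e.g. the first) is enough for the evaluation.
collection = next(iter(dataset['collections'].values()))
tf = collection['transforms']['base_footprint-base_link']  # illustrative key
print(tf['trans'], tf['quat'])
```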

miguelriemoliveira commented 1 year ago

Hi @JorgeFernandes-Git and @manuelgitgomes,

> I'll merge the scripts and maybe rename it to link_to_frame?

I don't really like that name; link and frame are more or less the same thing. I suggest ground_truth_evaluation. Does that sound better? Do you have any other suggestions?

manuelgitgomes commented 1 year ago

> I suggest ground_truth_evaluation. Does that sound better? Do you have any other suggestions?

It does, @miguelriemoliveira, but maybe I'll use something like ground_truth_frame_evaluation, to state explicitly that we are evaluating frames (or tfs).

miguelriemoliveira commented 1 year ago

> It does, @miguelriemoliveira, but maybe I'll use something like ground_truth_frame_evaluation, to state explicitly that we are evaluating frames (or tfs).

Right, even better.

JorgeFernandes-Git commented 1 year ago

Hi @miguelriemoliveira and @manuelgitgomes.

I merged the scripts. This is the result: https://github.com/lardemua/atom/blob/JorgeFernandes-Git/issue543/atom_evaluation/scripts/ground_truth_frame_evaluation

Usage:

usage: ground_truth_frame_evaluation [-h] -train_json TRAIN_JSON_FILE -test_json TEST_JSON_FILE [-tf TARGET_FRAME] [-sf SOURCE_FRAME] [-sfr SAVE_FILE_RESULTS]

optional arguments:
  -h, --help            show this help message and exit
  -train_json TRAIN_JSON_FILE, --train_json_file TRAIN_JSON_FILE
                        Json file containing train input dataset.
  -test_json TEST_JSON_FILE, --test_json_file TEST_JSON_FILE
                        Json file containing test input dataset.
  -tf TARGET_FRAME, --target_frame TARGET_FRAME
                        Target transformation frame.
  -sf SOURCE_FRAME, --source_frame SOURCE_FRAME
                        Source transformation frame. If no frame is provided, computes all estimated frames.
  -sfr SAVE_FILE_RESULTS, --save_file_results SAVE_FILE_RESULTS
                        Output folder to where the results will be stored.

Examples:

rosrun atom_evaluation ground_truth_frame_evaluation -train_json ~/atom_calibration.json -test_json ~/dataset.json

Output:

| Frame | Xcal-Xgt (mm) | Ycal-Ygt (mm) | Zcal-Zgt (mm) | Roll_cal-Roll_gt (deg) | Pitch_cal-Pitch_gt (deg) | Yaw_cal-Yaw_gt (deg) | Average - Trans (mm) | Average - Rot (deg) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| base_link_mb_to_base_link | 4.3803 | 3.3343 | 5.2759 | 0.0157 | 0.0895 | 0.0895 | 7.6250 | 0.1413 |
| camera | 0.7227 | 26.6899 | 0.7237 | 0.0238 | 0.0910 | 0.0910 | 31.0360 | 0.0990 |
| camera_mb | 2.8538 | 23.4396 | 0.9044 | 0.0528 | 0.1254 | 0.1254 | 23.6300 | 0.2852 |
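For reference, this is roughly what each row amounts to. A minimal sketch, not the script's actual code; it assumes translations are stored in meters and quaternions as [x, y, z, w]:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def frame_errors(trans_cal, quat_cal, trans_gt, quat_gt):
    """Per-axis errors between a calibrated and a ground-truth transform.

    Returns translation differences in mm and roll/pitch/yaw differences
    in degrees (naive per-angle subtraction, ignoring angle wrapping).
    """
    dxyz_mm = (np.asarray(trans_cal) - np.asarray(trans_gt)) * 1000.0
    rpy_cal = Rotation.from_quat(quat_cal).as_euler('xyz', degrees=True)
    rpy_gt = Rotation.from_quat(quat_gt).as_euler('xyz', degrees=True)
    return dxyz_mm, rpy_cal - rpy_gt
```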

Full args:

rosrun atom_evaluation ground_truth_frame_evaluation -train_json ~/atom_calibration.json -test_json ~/dataset.json -tf base_link -sf camera -sfr ~/Desktop

Output:

| Frame | Xcal-Xgt (mm) | Ycal-Ygt (mm) | Zcal-Zgt (mm) | Roll_cal-Roll_gt (deg) | Pitch_cal-Pitch_gt (deg) | Yaw_cal-Yaw_gt (deg) | Average - Trans (mm) | Average - Rot (deg) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| camera | 0.7227 | 26.6899 | 0.7237 | 0.0238 | 0.0910 | 0.0910 | 26.7095 | 0.1824 |

Is this what you had in mind?

miguelriemoliveira commented 1 year ago

Hi @JorgeFernandes-Git,

The averages don't make sense. Take a look at base_link_mb_to_base_link.

JorgeFernandes-Git commented 1 year ago

Hi @miguelriemoliveira.

The average is computed in this part:

https://github.com/lardemua/atom/blob/9921a3028f3c903423c54a71c1c96614f1c4bab4/atom_evaluation/scripts/ground_truth_frame_evaluation#L87-L94

It probably should have a different name.

miguelriemoliveira commented 1 year ago

Hi @JorgeFernandes-Git,

but my problem is that the average has strange values that (I think) did not occur before. For example:

base_link_mb_to_base_link | 4.3803 | 3.3343 | 5.2759 | 0.0157 | 0.0895 | 0.0895 | 7.6250 | 0.1413

The average is 7.6? That's strange...

Also the results for camera and camera_mb sound strange.

Did you change something? Can you run @manuelgitgomes's old code and see if the average is the same as with your code?

manuelgitgomes commented 1 year ago

Hello @miguelriemoliveira and @JorgeFernandes-Git.

I think this misunderstanding might be because of my poor explanation.

It does not give the average. Instead, it gives the norm of the translation and rotation vectors.

Imagine you have a translation vector of (4.3803, 3.3343, 5.2759). The norm of this vector is 7.6250. I argue this paints a better picture of the result than the average of all values.
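As a quick numpy check against the base_link_mb_to_base_link row above (a sketch, not the script's code):

```python
import numpy as np

# Euclidean norm of the per-axis translation errors (mm).
print(np.linalg.norm([4.3803, 3.3343, 5.2759]))  # ~7.6250, matching the table
```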

JorgeFernandes-Git commented 1 year ago

Hi @manuelgitgomes and @miguelriemoliveira.

I changed the names to Trans and Rot; Average didn't make sense.

Comparing both scripts with the same datasets, the same target frame, and the same source frame:

Notice that these results are from base_link to camera, whose relative pose changes between collections; when the frames move, the results differ per collection. By contrast, base_footprint to base_link is static, so it is equal in all collections (see the sketch after these outputs).

sensor_to_frame_evaluation

rosrun atom_evaluation sensor_to_frame_evaluation -train_json ~/atom_calibration_nig_0.1_0.1.json -test_json ~/dataset.json -tf base_link -ss camera

| Collection # | Trans (mm) | Rot (deg) |
| --- | --- | --- |
| 000 | 22.3480 | 0.0986 |
| ... | ... | ... |
| 079 | 338.9718 | 123.6769 |
| Averages | 767.2433 | 109.9706 |


ground_truth_frame_evaluation

rosrun atom_evaluation ground_truth_frame_evaluation -train_json ~/atom_calibration_nig_0.1_0.1.json -test_json ~/dataset.json -tf base_link -sf camera

| Frame | Xcal-Xgt (mm) | Ycal-Ygt (mm) | Zcal-Zgt (mm) | Roll_cal-Roll_gt (deg) | Pitch_cal-Pitch_gt (deg) | Yaw_cal-Yaw_gt (deg) | Trans (mm) | Rot (deg) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| camera | 0.8180 | 22.2847 | 1.4673 | 0.0142 | 0.0315 | 0.0315 | 767.2433 | 109.9706 |
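A small sketch of that static-vs-moving distinction, under the same dataset-layout assumption as the earlier sketch (key names illustrative): a transform that is identical in every collection only needs a single evaluation, while one that varies should be reported per collection.

```python
import json
import numpy as np

def is_static(dataset, tf_key):
    """True if the transform is identical across all collections."""
    tfs = [c['transforms'][tf_key] for c in dataset['collections'].values()]
    return all(np.allclose(t['trans'], tfs[0]['trans']) and
               np.allclose(t['quat'], tfs[0]['quat']) for t in tfs)

with open('dataset.json') as f:  # illustrative path
    dataset = json.load(f)

print(is_static(dataset, 'base_footprint-base_link'))  # expected True here
```
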
JorgeFernandes-Git commented 1 year ago

The ground truth frame evaluation discussion continues in #553.

miguelriemoliveira commented 1 year ago

Hi @JorgeFernandes-Git,

all the things you have done are in your fork:

https://github.com/lardemua/atom/tree/JorgeFernandes-Git/issue543

right? We should reserve some time to merge these improvements into the main branch.

JorgeFernandes-Git commented 1 year ago

I suggest we arrange a meeting to tackle it as soon as possible, while it's still fresh.

If you're not available, I can work on it with @manuelgitgomes when he's free. Let me know your thoughts on this.

miguelriemoliveira commented 1 year ago

This is operational and integrated into the main branch.