Closed JorgeFernandes-Git closed 1 year ago
So far, these changes have been implemented in a new branch of ATOM, and it works for both new and old config.yml files:
Add additional_tfs to config.yml
file template:
https://github.com/lardemua/atom/blob/47247129b8723e767395d5198e47f81f596a1723/atom_calibration/templates/config.yml#L93-L109
Add verifications in configure_calibration_pkg:
https://github.com/lardemua/atom/blob/47247129b8723e767395d5198e47f81f596a1723/atom_calibration/scripts/configure_calibration_pkg#L252-L293
Add if statement in config_io.py to run old and new config.yml files:
https://github.com/lardemua/atom/blob/47247129b8723e767395d5198e47f81f596a1723/atom_core/src/atom_core/config_io.py#L136-L142
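A hypothetical sketch of that backward-compatibility idea (the key name additional_tfs comes from the template above; the dict shapes here are made up for illustration, not the actual parsed ATOM config):

```python
# Old config.yml files have no 'additional_tfs' key, so default it when missing
# (illustrative dicts, not the actual parsed ATOM config).
old_config = {"sensors": {"camera": {}}}
new_config = {"sensors": {"camera": {}},
              "additional_tfs": {"base_link_mb-base_link": {}}}

for config in (old_config, new_config):
    if "additional_tfs" not in config:
        config["additional_tfs"] = None

print(old_config["additional_tfs"])               # None
print(list(new_config["additional_tfs"].keys()))  # ['base_link_mb-base_link']
```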
Add srv files for additional_tfs to atom_msgs:
Add modifications in set_initial_estimate:
https://github.com/lardemua/atom/blob/47247129b8723e767395d5198e47f81f596a1723/atom_calibration/scripts/set_initial_estimate#L62-L65
https://github.com/lardemua/atom/blob/47247129b8723e767395d5198e47f81f596a1723/atom_calibration/scripts/set_initial_estimate#L77-L90
https://github.com/lardemua/atom/blob/47247129b8723e767395d5198e47f81f596a1723/atom_calibration/scripts/set_initial_estimate#L102-L106
https://github.com/lardemua/atom/blob/47247129b8723e767395d5198e47f81f596a1723/atom_calibration/scripts/set_initial_estimate#L171-L189
https://github.com/lardemua/atom/blob/47247129b8723e767395d5198e47f81f596a1723/atom_calibration/scripts/set_initial_estimate#L197-L200
https://github.com/lardemua/atom/blob/47247129b8723e767395d5198e47f81f596a1723/atom_calibration/scripts/set_initial_estimate#L272-L274
Add script additional_tf.py to control the marker on set_initial_estimate.
Updated calibration tree:
Hi @miguelriemoliveira and @manuelgitgomes.
When running the calibration, it seems like the respective transformation isn't being optimized.
At the moment, the modifications in the calibrate file are:
https://github.com/lardemua/atom/blob/1b031b4b60b90539ef274be9da3093d7e2e28ef2/atom_calibration/scripts/calibrate#L203-L211
But it seems those modifications aren't enough. The output optimized xacro file has the same joint values as the input xacro file.
I made some prints to the terminal, as you may see in the code, and they seem right:
Initializing optimizer...
Anchored sensor is
Creating parameters ...
PASSOU AQUI base_link_mb-base_link {'camera_link-camera_rgb_frame', 'camera_mb_link-camera_mb_rgb_frame', 'base_link_mb-base_link'}
Creating residuals ...
RNG Seed: 4183316948613496418
Computing sparse matrix ...
Initializing optimization ...
One optimizer iteration has 13 function calls.
Starting optimization ...
My guess is that we have to add something after this. Do you have suggestions?
Hello @JorgeFernandes-Git!
Try to remove the comment in this line, run the code and show us the print:
This prints the sparse matrix, which is responsible for associating which parameters (tfs) can be changed to optimize the residuals (the error from the error function of each sensor).
Suppose you have two cameras named camera_1 and camera_2. Changing the transformation from camera_1 to base_link does not affect camera_2 at all, but it obviously affects camera_1.
The sparse matrix is a binary matrix (filled with 0s and 1s) in which 0 indicates no relation and 1 indicates a relation.
This simple case's sparse matrix is as follows:
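As an illustration (the residual and parameter names here are hypothetical, not ATOM output), the matrix for this two-camera case could be built like this:

```python
# Rows are residuals, columns are (groups of) tf parameters; 1 marks influence.
sparse_matrix = {
    "camera_1_residual": {"camera_1-base_link": 1, "camera_2-base_link": 0},
    "camera_2_residual": {"camera_1-base_link": 0, "camera_2-base_link": 1},
}

# Changing camera_1's tf influences only camera_1's residual:
assert sparse_matrix["camera_1_residual"]["camera_1-base_link"] == 1
assert sparse_matrix["camera_2_residual"]["camera_1-base_link"] == 0
```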
Seeing the sparse matrix allows us to check whether ATOM recognizes the tf base_link_to_base_link_mb as influencing your cameras.
@manuelgitgomes the output is this:
Computing sparse matrix ...
Sparsity matrix:
                         camera_mb_link-camera_mb_rgb_frame_x  ...  odom-pattern_link_r3
c000_camera_corner22                                        0  ...                     1
c000_camera_corner23                                        0  ...                     1
c000_camera_corner24                                        0  ...                     1
c000_camera_corner25                                        0  ...                     1
c000_camera_corner26                                        0  ...                     1
...                                                       ...  ...                   ...
c079_camera_mb_corner80                                     1  ...                     1
c079_camera_mb_corner81                                     1  ...                     1
c079_camera_mb_corner82                                     1  ...                     1
c079_camera_mb_corner83                                     1  ...                     1
c079_camera_mb_corner84                                     1  ...                     1
[12204 rows x 24 columns]
Initializing optimization ...
One optimizer iteration has 13 function calls.
Perhaps this helps:
Maybe we need a nested for loop under this part:
To iterate the additional transformations?
@JorgeFernandes-Git Ok, so no info on that print. Try to fully print/save the sparse matrix. The code for this is in here: https://github.com/lardemua/atom/blob/74a4628c8b8c8f63f078fa78ab477ffddef5dc2a/atom_core/src/atom_core/optimization_utils.py#L650-L655
It seems there is a conversion from the sparse matrix to a csv. Can you verify if it is saved? And if so, can you share it?
> To iterate the additional transformations?
I do not think so; this is to generate residuals, which regard only sensors.
Here it is @manuelgitgomes:
https://drive.google.com/file/d/1qxgYYsW2Azn1TDs-MjDb0xq8ZtQIPgFI/view?usp=sharing
Seems like zeros all over the place in translations and rotations from base_link_mb_to_base_link.
> Seems like zeros all over the place in translations and rotations from base_link_mb_to_base_link.
@JorgeFernandes-Git After analyzing, it seems like so. Here is my analysis:
>>> matrix = pd.read_csv('sparse_matrix.csv')
>>> matrix['base_link_mb-base_link_x'].sum()
0
>>> matrix['base_link_mb-base_link_y'].sum()
0
>>> matrix['base_link_mb-base_link_z'].sum()
0
>>> matrix['base_link_mb-base_link_r1'].sum()
0
>>> matrix['base_link_mb-base_link_r2'].sum()
0
>>> matrix['base_link_mb-base_link_r3'].sum()
0
So we can say that the error is in the computation of the sparse matrix:
You have to debug this function. We can say that the condition in the if clause on line 542 is not being fulfilled when it should be.
If you need any further help, please add a comment here and I will try to help you.
As far as I understood after some debugging, self.residuals doesn't have these params:
base_link_mb-base_link_x base_link_mb-base_link_y base_link_mb-base_link_z base_link_mb-base_link_r1 base_link_mb-base_link_r2 base_link_mb-base_link_r3
As a result, the if statement isn't true for those params.
For self.residuals to have those, the pushResidual method must be called. But, as you said, this is only for sensors.
I think we must find a different approach.
Let's see if @miguelriemoliveira has any idea.
Great catch @manuelgitgomes . I think that's the problem.
The sparse matrix is controlled when you add the residuals, as @JorgeFernandes-Git says. For each residual we have to say which parameters affect it.
We create a transform set to create parameters, but do not use it for the residuals.
Let me think about it tomorrow and suggest a solution ...
Jorge, to help I would need a print of the parameters and a print of the residuals.
Is the latest code pushed to the branch? How can I run in my computer?
Just got home now @miguelriemoliveira.
Dataset: https://drive.google.com/file/d/1PrGdk892LbXduyg-KvOf6bDnU08WtT70/view?usp=sharing
Run calibration:
roslaunch e5_dualrgb_arm2rgb_calibration calibrate.launch
rosrun atom_calibration calibrate -json $ATOM_DATASETS/e5_dualrgb_arm2rgb_v2/dataset.json -v -rv -si -vo
So I think @JorgeFernandes-Git was very close.
As I said before, for each residual we create a list of parameters that we know influence this residual. Then, when we push the residual we add that list of influencing parameters. This is how the sparse matrix is created.
We could create the list of influencing parameters by hand, but to make things easier I created a function to get all parameters matching a pattern.
In the calibrate script what we are doing is iterating each sensor and adding the transformation parameters that we have in the sensor yaml as influencing parameters to the residuals of this sensor. This was valid only when all the optimized transforms were defined in the sensors. Actually, I think it could fail in some situations, but never mind that now.
My suggestion is to change this mechanism:
with one that picks up the sensor's chain, checks which of the transformations in that chain are also in the set of transforms to be optimized, and marks those as influencing.
Here is the code suggestion:
# Sensor related parameters
# sensors_transform_key = generateKey(
#     sensor["calibration_parent"], sensor["calibration_child"])
# params = opt.getParamsContainingPattern(sensors_transform_key)

# Issue #543: Create the list of transformations that influence this residual
# by analyzing the transformation chain.
params = []
transforms_list = list(transforms_set)  # because I am not sure that I can use a set for some of the next steps
for transform_in_chain in sensor['chain']:
    transform_key = generateKey(transform_in_chain["parent"], transform_in_chain["child"])
    if transform_key in transforms_list:
        params.extend(opt.getParamsContainingPattern(transform_key))
@JorgeFernandes-Git , I think this should do it. Can you try it out please?
> Just got home now @miguelriemoliveira.
> Dataset: https://drive.google.com/file/d/1PrGdk892LbXduyg-KvOf6bDnU08WtT70/view?usp=sharing
> Run calibration:
> roslaunch e5_dualrgb_arm2rgb_calibration calibrate.launch
> rosrun atom_calibration calibrate -json $ATOM_DATASETS/e5_dualrgb_arm2rgb_v2/dataset.json -v -rv -si -vo
Thanks. I think it should work as suggested above. If not I can try it on my side with these tips.
Seems like it's working. I'm running the calibration now. :+1:
Great! Let me know how it goes.
The generated xacro file doesn't change this joint. It remains equal to the original:
<joint name="moving_base_to_ur5e" type="fixed">
<origin xyz="-0.296 -0.1688 0.826" rpy="0.0 0.0 0.0"/>
<parent link="base_link_mb"/>
<child link="base_link"/>
</joint>
I'll push the new code now
> The generated xacro file doesn't change this joint. It remains equal to the original:
>
> <joint name="moving_base_to_ur5e" type="fixed">
>   <origin xyz="-0.296 -0.1688 0.826" rpy="0.0 0.0 0.0"/>
>   <parent link="base_link_mb"/>
>   <child link="base_link"/>
> </joint>
You mean it optimized ok, but then the xacro is not properly updated? That sounds likely ... let me check...
Yes. The base of the manipulator moved during the calibration, but in the end it came out like this. Tomorrow I'll look at this more in depth and try to find what could be missing.
Hi @JorgeFernandes-Git ,
I think this should solve it. We can pass a new argument (the third), which is the transforms list we created here.
So I would call the saveResultsXacro function like this:
saveResultsXacro(dataset, selected_collection_key, list(transforms_set))
and the function should be:
def saveResultsXacro(dataset, selected_collection_key, transforms_list):
    # Cycle all transforms to be calibrated, and for each replace the optimized
    # transform in the original xacro

    # Parse xacro description file
    description_file, _, _ = uriReader(dataset["calibration_config"]["description_file"])
    xml_robot = readXacroFile(description_file)

    # for sensor_key in dataset["calibration_config"]["sensors"]:
    #     child = dataset["calibration_config"]["sensors"][sensor_key]["child_link"]
    #     parent = dataset["calibration_config"]["sensors"][sensor_key]["parent_link"]
    #     transform_key = generateKey(parent, child)

    for transform_key in transforms_list:
        # transform_key has the format parent_link-child_link, so recover both links from it
        parent, child = transform_key.split("-")

        trans = list(dataset["collections"][selected_collection_key]["transforms"][transform_key]["trans"])
        quat = list(dataset["collections"][selected_collection_key]["transforms"][transform_key]["quat"])

        found = False
        for joint in xml_robot.joints:
            if joint.parent == parent and joint.child == child:
                found = True
                # print("Found joint: " + str(joint.name))
                # print("Replacing xyz = " + str(joint.origin.xyz) + " by " + str(trans))
                joint.origin.xyz = trans
                rpy = list(tf.transformations.euler_from_quaternion(quat, axes="sxyz"))
                # print("Replacing rpy = " + str(joint.origin.rpy) + " by " + str(rpy))
                joint.origin.rpy = rpy
                break

        if not found:
            raise ValueError("Could not find transform " + str(transform_key) + " in " + description_file)
I think this should solve it.
@miguelriemoliveira the suggestions you made worked just fine.
I also added this part to the saveResultsXacro function to separate the parent link and the child link:
Used in this if statement:
Maybe there's a better way of doing this. The transform_key has the format parent_link-child_link.
Calibration procedure: https://youtu.be/r8cS9na9IzU
I think the results are great! The calibration improved the expected joint:
Optimized:
<joint name="moving_base_to_ur5e" type="fixed">
<origin xyz="-0.29743767470529453 -0.1675476317529771 0.8300205665011378"
rpy="0.0006398218243443629 0.003321090521832441 0.004456368612619973"/>
<parent link="base_link_mb"/>
<child link="base_link"/>
</joint>
Vs. the original:
<joint name="moving_base_to_ur5e" type="fixed">
<origin xyz="-0.296 -0.1688 0.826" rpy="0.0 0.0 0.0"/>
<parent link="base_link_mb"/>
<child link="base_link"/>
</joint>
And the sensors:
Optimized:
<joint name="camera_mb_rgb_joint" type="fixed">
<origin xyz="-0.0022975103764571385 -0.019642525277753536 0.007599352868806068"
rpy="-4.0332030144216577e-05 0.000605245986036137 0.0021112608202621145"/>
<parent link="camera_mb_link"/>
<child link="camera_mb_rgb_frame"/>
</joint>
Original:
<joint name="camera_mb_rgb_joint" type="fixed">
<origin xyz="0.0 -0.045 0.0" rpy="0.0 0.0 0.0"/>
<parent link="camera_mb_link"/>
<child link="camera_mb_rgb_frame"/>
</joint>
Optimized:
<joint name="camera_rgb_joint" type="fixed">
<origin xyz="-0.0028312372348851898 -0.026825806057158758 -0.0048802445402278734"
rpy="0.00014353852667010308 -0.0037479914085974658 0.0054417806550962995"/>
<parent link="camera_link"/>
<child link="camera_rgb_frame"/>
</joint>
Original:
<joint name="camera_rgb_joint" type="fixed">
<origin xyz="0.0 -0.045 0.0" rpy="0.0 0.0 0.0"/>
<parent link="camera_link"/>
<child link="camera_rgb_frame"/>
</joint>
I noticed that for the sensors, the y coordinate has a different value compared with the GT, but the same happened for the calibration without additional tfs. For now, it's working. I'll do more tests with better datasets.
Latest calibration results:
Collection | camera (px) | camera_mb (px) |
---|---|---|
Averages | 0.5313 | 0.4917 |
Good news.
> @miguelriemoliveira the suggestions you made worked just fine.
Great. Happy to hear that.
> I also added this part to the saveResultsXacro function to separate the parent link and the child link:
> Used in this if statement:
> Maybe there's a better way of doing this. The transform_key has the format parent_link-child_link.
I am sure there is a better way. Even the part about passing a new parameter could be improved. Can you please create an issue specifically for this and assign it to me? For now let's leave it.
> I noticed that for the sensors, the y coordinate has a different value compared with the GT, but the same happened for the calibration without additional tfs. For now, it's working. I'll do more tests with better datasets.
> Latest calibration results:
>
> Collection | camera (px) | camera_mb (px) |
> ---|---|---|
> Averages | 0.5313 | 0.4917 |
These results are really nice. I am a bit overwhelmed ... does this mean that ATOM is solving without a problem the manipulator base to robot calibration?
> These results are really nice. I am a bit overwhelmed ... does this mean that ATOM is solving without a problem the manipulator base to robot calibration?
Yes. The joint of the manipulator base presents these values:
Ground truth (measured from the cad file, inserted by hand):
<origin xyz="-0.296 -0.1688 0.826" rpy="0.0 0.0 0.0"/>
Calibration results:
<origin xyz="-0.29743767470529453 -0.1675476317529771 0.8300205665011378"
rpy="0.0006398218243443629 0.003321090521832441 0.004456368612619973"/>
Here is the procedure: https://youtu.be/r8cS9na9IzU
Congratulations @JorgeFernandes-Git!
This is great news!
To add initial guess noise to the additional transformations, I added this block:
@miguelriemoliveira @manuelgitgomes can you give it a look to check whether it seems plausible? It looks like it's working.
Hello @JorgeFernandes-Git!
It seems fine to me. However, because you copied the code below, we have very similar code in the same script. This tends to be bad practice, because if you want to change this function you would need to change it twice.
I suggest creating a new function named addNoiseToTF or similar. This function would go something like:
def addNoiseToTF(calibration_parent, calibration_child):
    tf_link = generateKey(calibration_parent, calibration_child, suffix='')

    # Get original transformation
    quat = dataset['collections'][selected_collection_key]['transforms'][tf_link]['quat']
    translation = dataset['collections'][selected_collection_key]['transforms'][tf_link]['trans']
    euler_angles = tf.transformations.euler_from_quaternion(quat)

    # Add noise to the 6 pose parameters
    v = np.random.uniform(-1.0, 1.0, 3)
    v = v / np.linalg.norm(v)
    new_translation = translation + v * nig_trans
    v = np.random.choice([-1.0, 1.0], 3) * nig_rot
    new_angles = euler_angles + v

    # Replace the original atomic transformations by the new noisy ones
    new_quat = tf.transformations.quaternion_from_euler(new_angles[0], new_angles[1], new_angles[2])
    dataset['collections'][selected_collection_key]['transforms'][tf_link]['quat'] = new_quat
    dataset['collections'][selected_collection_key]['transforms'][tf_link]['trans'] = list(new_translation)

    # Copy randomized transform to all collections
    for collection_key, collection in dataset['collections'].items():
        dataset['collections'][collection_key]['transforms'][tf_link]['quat'] = \
            dataset['collections'][selected_collection_key]['transforms'][tf_link]['quat']
        dataset['collections'][collection_key]['transforms'][tf_link]['trans'] = \
            dataset['collections'][selected_collection_key]['transforms'][tf_link]['trans']
After creating this function, add it to both for loops. Did you understand?
If you use the function in this comment, please verify its contents, as I have not tested it.
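As a standalone illustration of that noise model (made-up nig values; not the ATOM code itself), showing that the translation moves by exactly nig_trans and each Euler angle by exactly nig_rot:

```python
import numpy as np

nig_trans, nig_rot = 0.1, 0.1
translation = np.array([-0.296, -0.1688, 0.826])
euler_angles = np.zeros(3)

# Random unit direction scaled by the translation noise magnitude
v = np.random.uniform(-1.0, 1.0, 3)
v = v / np.linalg.norm(v)
new_translation = translation + v * nig_trans

# Per-axis +/- nig_rot offset for the rotation
v = np.random.choice([-1.0, 1.0], 3) * nig_rot
new_angles = euler_angles + v

assert np.isclose(np.linalg.norm(new_translation - translation), nig_trans)
assert np.allclose(np.abs(new_angles - euler_angles), nig_rot)
```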
Hi @JorgeFernandes-Git ,
I agree with @manuelgitgomes 's suggestion.
Good tip, thanks.
I followed it and implemented like this: https://github.com/lardemua/atom/blob/85af14578a1d4fd0f0897ee30cb528ab48a3a3dc/atom_core/src/atom_core/dataset_io.py#L623-L689
So far I got this:
Ground truth:
<origin xyz="-0.296 -0.1688 0.826" rpy="0.0 0.0 0.0"/>
-nig 0.1 0.1
<origin xyz="-0.29453838375094427 -0.16750697765626305 0.8203502127898897"
rpy="0.0004717161498466556 -0.0007159745850205707 0.001072048560706622"/>
-nig 0.2 0.2
<origin xyz="-0.29466405085302455 -0.16821715652259384 0.83670476943398"
rpy="0.0004903019915529824 1.0848955144563524e-05 0.000890764102930763"/>
-nig 0.3 0.3
Failed.
-nig 0.3 0.1
<origin xyz="-0.29440192287271894 -0.1669219038919996 0.7967067778931122"
rpy="0.0008937862142713434 -0.0014082129733468077 0.0010444534437917998"/>
-nig 1 0.1
<origin xyz="-0.293863191575118 -0.16673845759295564 1.116925543922768"
rpy="0.0006755916876319771 -0.0018749609954668896 0.0013069105926522425"/>
-nig 0.1 0.3
Failed.
Hi @JorgeFernandes-Git,
good news. The best option is to use @manuelgitgomes's tool called sensor_to_frame_evaluation to report the results.
But the essential point is that we are calibrating accurately when the nig is significant, i.e., 0.1; 0.1. That is very good news.
Great news @JorgeFernandes-Git!
This means that ATOM's ability to calibrate this case is not jeopardized, which is great.
Regarding the results, I have previously noticed that ATOM is more sensitive to rotation changes, as can be seen in the ESWA paper.
I agree with @miguelriemoliveira that you should now evaluate the results using the evaluation script.
Should I run it like this?
rosrun atom_evaluation sensor_to_frame_evaluation -train_json ~/datasets/e5_dualrgb_arm2agv/atom_calibration.json -test_json ~/datasets/e5_dualrgb_arm2agv/atom_calibration.json -ss camera_mb -tf odom
@JorgeFernandes-Git Try to use -ss base_link_mb -tf base_link and see if it works.
@manuelgitgomes That doesn't work because the -ss uses the sensors in the dataset. I'll try to adapt the script to do it.
The command I used gave me strange values...
I adapted the script and ran this:
-at base_link_mb_to_base_link -tf base_link_mb
But the results are these:
Collection # | Trans (mm) | Rot (deg) |
---|---|---|
Averages | 27.9648 | 0.1201 |
This is not showing the results we were talking about, right? We were talking about calculating the error between GT and calibrated link, which I think isn't this, or is it?
Here is the script: https://github.com/lardemua/atom/blob/da44b357bf0ef64701fb7d5b1ad5ef2a7ab52dfe/atom_evaluation/scripts/sensor_to_frame_evaluation#L74-L77
Hi @JorgeFernandes-Git ,
I think that's it. The translation error in millimeters and the rotation error in degrees. What were you expecting?
This last discussion should be in a different issue... better open one specifically for the evaluation topic.
@miguelriemoliveira I was expecting a script that outputs something like:
Links | X | Y | Z | R | P | Y |
---|---|---|---|---|---|---|
link1 | abs(Xgt - Xcal) | abs(Ygt - Ycal) | ... | ... | ... | ... |
link2 | ... | ... | ... | ... | ... | ... |
gt - ground truth; cal - calibration
> @miguelriemoliveira I was expecting a script that outputs something like:
>
> Links | X | Y | Z | R | P | Y |
> ---|---|---|---|---|---|---|
> link1 | abs(Xgt - Xcal) | abs(Ygt - Ycal) | ... | ... | ... | ... |
> link2 | ... | ... | ... | ... | ... | ... |
>
> gt - ground truth; cal - calibration
From this idea, I created this script, link_to_ground_truth_evaluation: https://github.com/lardemua/atom/blob/JorgeFernandes-Git/issue543/atom_evaluation/scripts/link_to_ground_truth_evaluation
In short, it compares the output URDF with the input URDF and computes the difference for each link calibrated.
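The comparison it performs can be sketched as follows (a simplified illustration using the joint values from earlier in this thread, not the script itself):

```python
import math

gt_xyz = [-0.296, -0.1688, 0.826]                 # ground truth origin (m)
cal_xyz = [-0.29743767, -0.16754763, 0.83002057]  # calibrated origin (m)
gt_rpy = [0.0, 0.0, 0.0]                          # ground truth (rad)
cal_rpy = [0.00063982, 0.00332109, 0.00445637]    # calibrated (rad)

# Per-axis absolute differences, reported in mm and degrees
errors_mm = [abs(c - g) * 1000.0 for c, g in zip(cal_xyz, gt_xyz)]
errors_deg = [math.degrees(abs(c - g)) for c, g in zip(cal_rpy, gt_rpy)]

print([round(e, 2) for e in errors_mm])  # [1.44, 1.25, 4.02]
```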
Usage example:
rosrun atom_evaluation link_to_ground_truth_evaluation -train_json ~/datasets/e5_dualrgb_arm2agv/atom_calibration.json
Output with nig 0.1 0.1:
Link # | Xcal-Xgt (mm) | Ycal-Ygt (mm) | Zcal-Zgt (mm) | Roll_cal-Roll_gt (deg) | Pitch_cal-Pitch_gt (deg) | Yaw_cal-Yaw_gt (deg) |
---|---|---|---|---|---|---|
base_link_mb_to_base_link | 1.82 | 1.40 | 31.06 | 0.00 | 0.00 | 0.00 |
camera | 0.95 | 25.38 | 0.06 | 0.00 | 0.00 | 0.00 |
camera_mb | 0.34 | 25.50 | 28.04 | 0.00 | 0.00 | 0.00 |
Output without nig:
Link # | Xcal-Xgt (mm) | Ycal-Ygt (mm) | Zcal-Zgt (mm) | Roll_cal-Roll_gt (deg) | Pitch_cal-Pitch_gt (deg) | Yaw_cal-Yaw_gt (deg) |
---|---|---|---|---|---|---|
base_link_mb_to_base_link | 1.94 | 2.32 | 0.25 | 0.00 | 0.00 | 0.00 |
camera | 0.84 | 23.79 | 0.04 | 0.00 | 0.00 | 0.00 |
camera_mb | 0.55 | 25.21 | 1.67 | 0.00 | 0.00 | 0.00 |
We can see a big difference in the z axis ...
Hi @JorgeFernandes-Git ,
I am a bit lost with this one. Didn't we come to the conclusion that @manuelgitgomes had done something like this? Are you upgrading his script or creating a new one?
@manuelgitgomes 's script is here:
> In short, it compares the output URDF with the input URDF and computes the difference for each link calibrated.
Ah, now I get it. So this is the difference from @manuelgitgomes's script?
In that case, I say we should have a single link_to_ground_truth_evaluation where we can define the transform to evaluate (like in @manuelgitgomes's script) or, if nothing is specified, the script will evaluate all transformations that were estimated.
Also, we need the average translation and rotation errors besides the per axis values.
I think it is great work from you, but it is better to merge it with @manuelgitgomes's previous code (or to create a script that covers all functionalities and then deprecate @manuelgitgomes's script).
@manuelgitgomes , what is your opinion on this one?
The script @manuelgitgomes did is different. It runs through all collections and can compare a chosen sensor/link to another frame through the calibration chain.
The one I did just compares the initial and final values of the estimated links. I did it because this comparison makes more sense to me, and also to try to build a script from scratch. But we can ignore it; I'm not sure if it adds something.
> Also, we need the average translation and rotation errors besides the per axis values.
By averages, you meant (x+y+z)/3 and (r+p+y)/3 @miguelriemoliveira?
Hello @miguelriemoliveira and @JorgeFernandes-Git!
> @manuelgitgomes , what is your opinion on this one?
> The script @manuelgitgomes did is different. It runs through all collections and can compare a chosen sensor/link to another frame through the calibration chain.
Yes, I believe they should be merged, as they are redundant. My script basically finds the frame related to the sensor and compares the tf between this frame and an arbitrary one to the ground truth. This can be generalized to the tf between two general frames, as you did in your script.
> By averages, you meant (x+y+z)/3 and (r+p+y)/3 @miguelriemoliveira?
The averages were the values in my original script. I believe this is calculated by multiplying the estimated tf by the inverted ground-truth tf (or vice versa). If they were the same, the result would be an identity matrix and the translation and rotation errors would be null. If they are not the same, which is more often than not the case, a tf matrix is output, from which we can derive the translation and rotation errors.
This is in here:
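The computation just described can be sketched like this (a minimal NumPy illustration with values from this thread, not the ATOM implementation):

```python
import numpy as np

def pose_error(T_est, T_gt):
    # Multiply the estimated tf by the inverted ground-truth tf; if they are
    # equal the result is the identity and both errors are null.
    T_err = T_est @ np.linalg.inv(T_gt)
    trans_error = np.linalg.norm(T_err[:3, 3])
    cos_angle = np.clip((np.trace(T_err[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    rot_error = np.arccos(cos_angle)  # angle of the residual rotation
    return trans_error, rot_error

T_gt = np.eye(4)
T_gt[:3, 3] = [-0.296, -0.1688, 0.826]     # ground-truth origin from this thread

T_est = np.eye(4)
T_est[:3, 3] = [-0.2974, -0.1675, 0.8300]  # calibrated origin (rounded)

trans_error, rot_error = pose_error(T_est, T_gt)
```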
Hi @manuelgitgomes.
I'll merge the scripts and maybe rename it to link_to_frame?
Do you have suggestions on how the output table should look? Should the script evaluate all collections?
Hi everyone.
How can we adapt ATOM to be capable of estimating the position of a link that doesn't belong directly to a sensor? What do I mean by that?
Watch this video first: https://www.youtube.com/watch?v=DbFU10SCOL8
Here, I was using the set initial estimate launch file generated by ATOM to estimate the position of the manipulator arm w.r.t. the AGV. The sensor used was a camera on the end-effector of the manipulator (eye-on-hand). Here is the graph tree: This didn't work: https://youtu.be/JhHzo6qumo0
Then I added a second sensor, a camera, to the AGV, and performed the calibration of the two cameras. Here is the system: Calibration graph: This worked just fine, as expected: https://youtu.be/AqQp4M0psj4?t=44
Now I want to be able to perform both of those calibrations simultaneously. Being so, the aim is to estimate 3 transformations with just 2 camera sensors.
Btw the robot's name is Zau 😄