lardemua / atom

Calibration tools for multi-sensor, multi-modal robotic systems
GNU General Public License v3.0

Implement ICP calibration from open3d #463

Closed miguelriemoliveira closed 2 years ago

miguelriemoliveira commented 2 years ago

Hi @manuelgitgomes ,

I just pushed the code we've been working on. If you need help to continue please let me know.

It is in atom_evaluation/scripts/other_calibrations.

manuelgitgomes commented 2 years ago

Hello! In relation to this, I believe it is working, but it is really difficult due to the bad quality of the initial estimate. As far as I know, the dataset already has the transformations, and changing the initial estimate would require generating a new dataset, which is not optimal in the current situation. I can generate a new one just for this purpose; that might be the option. Tomorrow I will try it.

miguelriemoliveira commented 2 years ago

I believe it is working, but it is really difficult due to the bad quality of the initial estimate.

Thanks for the work.

As far as I know, the dataset already has the transformations, and changing the initial estimate would require generating a new dataset, which is not optimal in the current situation. I can generate a new one just for this purpose; that might be the option. Tomorrow I will try it.

You are right, in order to use the initial estimate we would need to generate a new dataset. The problem is that if you did that, the comparison between ATOM and ICP would be unfair. If one approach performed better than the other, we would not be able to tell whether this was due to the intrinsic qualities of the approach or to the fact that the dataset that approach was using was somehow easier.

So in my view there are two options:

  1. Do a new dataset, but in that case we would need to run the ATOM calibration on it as well. Also, we would have to do a simulated and a real one, right?
  2. Since the option above may represent a heavy workload, perhaps we should instead create a functionality in the icp script that would allow the users to drag and drop the point clouds so that they more or less align before running the icp procedure. The resulting transformation would be the manual transformation (Tmanual) combined with the icp transformation (Ticp), like this: Testimated = Ticp * Tmanual

I would vote for 2, but let me know your opinion @manuelgitgomes
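Option 2 boils down to composing two homogeneous transforms. A minimal numpy sketch (the names follow Tmanual/Ticp above; this is an illustration, not the actual script):

```python
import numpy as np

def compose_transforms(T_icp, T_manual):
    """Combine the manual pre-alignment with the ICP refinement.

    Both inputs are 4x4 homogeneous transforms; the manual alignment
    is applied first, then the ICP correction, giving
    T_estimated = T_icp @ T_manual.
    """
    return T_icp @ T_manual

# Toy example: manual alignment moves 1.0 m along x, ICP refines by 0.1 m.
T_manual = np.eye(4)
T_manual[0, 3] = 1.0
T_icp = np.eye(4)
T_icp[0, 3] = 0.1
T_estimated = compose_transforms(T_icp, T_manual)  # x translation: 1.1
```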

miguelriemoliveira commented 2 years ago

Looked around ... 2 is here: http://www.open3d.org/docs/0.9.0/tutorial/Advanced/interactive_visualization.html#manual-registration

manuelgitgomes commented 2 years ago

Looked around ... 2 is here: http://www.open3d.org/docs/0.9.0/tutorial/Advanced/interactive_visualization.html#manual-registration

Ok, it seems the better option! But wouldn't that also be "unfair" to ATOM? Nevertheless, it seems easy to implement.

miguelriemoliveira commented 2 years ago

Yes it would, because ICP would get an initial manual help. But, not to be seen as a conspiracy theory crackpot : ), ATOM is always treated unfairly in these comparisons because it calibrates all sensors altogether, while the other approaches calibrate only the single pair of sensors that will then be evaluated to see which methodology performs best.

miguelriemoliveira commented 2 years ago

Let's call it ICP with manual alignment ...

manuelgitgomes commented 2 years ago

I think something else might be wrong here. The first picture is the ICP without any calibration. It's visible that the pattern is on opposite sides of the room. image

But then, when running the dataset_review, this is the initial estimate, which is visibly different from the above. image

I will try and fix this first!

miguelriemoliveira commented 2 years ago

Sounds like a bug. Perhaps our idea from yesterday, of using the transformation in the dataset, will be enough.

manuelgitgomes commented 2 years ago

Bug solved, the transform was applied to the source instead of the target: image

Nevertheless, the results are not impressive. We can do the manual alignment and present both in the paper. image

miguelriemoliveira commented 2 years ago

Great, Lets discuss this afternoon.

manuelgitgomes commented 2 years ago

Hello @miguelriemoliveira and @danifpdra! ICP is mostly implemented, with some user experience flaws. Currently, it saves 4 JSON files:

First results of the initial_best and aligned_best, respectively:

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
            #                       RMS                   Avg Error                 X Error                  Y Error           X Standard Deviation     Y Standard Deviation   
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
            0                    0.454621                  0.4458                   0.3133                   0.2060                   0.0480                   0.1369          
            1                    0.394177                  0.3655                   0.2040                   0.2120                   0.0304                   0.1501          
            2                    0.331603                  0.3197                   0.2221                   0.1456                   0.0257                   0.0892          
            3                    0.363690                  0.3479                   0.2753                   0.1342                   0.0537                   0.0920          
            4                    0.337166                  0.3148                   0.2329                   0.1364                   0.0499                   0.0975          
            5                    0.325706                  0.2839                   0.1656                   0.1657                   0.0799                   0.1323          
            6                    0.407685                  0.3760                   0.2826                   0.1441                   0.1025                   0.1106          
            7                    0.244207                  0.2177                   0.1020                   0.1405                   0.0265                   0.1181          
            8                    0.410387                  0.4028                   0.3435                   0.1092                   0.0727                   0.0992          
            9                    0.354410                  0.3446                   0.3149                   0.0886                   0.0906                   0.0597          
           10                    0.359381                  0.3422                   0.2787                   0.1355                   0.0616                   0.0932          
           11                    0.334751                  0.3044                   0.2338                   0.1156                   0.0836                   0.0855          
           12                    0.348726                  0.3423                   0.2986                   0.0945                   0.0814                   0.0864          
           13                    0.353193                  0.3451                   0.2941                   0.1085                   0.0848                   0.0897          
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
           All                   0.207025                  0.5084                   0.2527                   0.1365                   0.0928                   0.1094          
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
            #                       RMS                   Avg Error                 X Error                  Y Error           X Standard Deviation     Y Standard Deviation   
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
            0                    0.032536                  0.0253                   0.0114                   0.0106                   0.0086                   0.0118          
            1                    0.027255                  0.0237                   0.0100                   0.0099                   0.0074                   0.0090          
            2                    0.112233                  0.0702                   0.0113                   0.0373                   0.0064                   0.0496          
            3                    0.114655                  0.0706                   0.0149                   0.0345                   0.0169                   0.0450          
            4                    0.110110                  0.0686                   0.0175                   0.0331                   0.0197                   0.0429          
            5                    0.103411                  0.0652                   0.0236                   0.0272                   0.0294                   0.0346          
            6                    0.271300                  0.2045                   0.0759                   0.0705                   0.0699                   0.0606          
            7                    0.031828                  0.0256                   0.0085                   0.0108                   0.0061                   0.0111          
            8                    0.048204                  0.0318                   0.0077                   0.0138                   0.0062                   0.0179          
            9                    0.159351                  0.1019                   0.0090                   0.0499                   0.0137                   0.0600          
           10                    0.193460                  0.1350                   0.0386                   0.0610                   0.0412                   0.0646          
           11                    0.174536                  0.1205                   0.0445                   0.0516                   0.0492                   0.0536          
           12                    0.054560                  0.0362                   0.0063                   0.0166                   0.0056                   0.0202          
           13                    0.056008                  0.0380                   0.0075                   0.0175                   0.0062                   0.0200          
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
           All                   0.075628                  0.1159                   0.0213                   0.0331                   0.0349                   0.0463          
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------

I have some questions:

With this said, most flaws are in the pick_points function: https://github.com/lardemua/atom/blob/db8d143cfbba3fd42337918c7cb3cb3cef8e1bff/atom_evaluation/scripts/other_calibrations/icp_calibration.py#L45-L61 This function calls a class, which I need to explore more thoroughly. The function currently has too large a marker for the picked points, inconsistent zoom, which can lead to misclicks and bad alignment, and inconsistent colour, which can be distracting. This script also breaks when the user picks fewer than 3 points or a different number of points in each pointcloud. A verbose option needs to be implemented as well.
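The crash on too few or mismatched picks could be caught with an explicit check before the transform is estimated; a hypothetical helper (names are illustrative, not the script's API):

```python
def validate_picked_points(picked_source, picked_target, min_points=3):
    """Validate user point picks before estimating a transform.

    At least `min_points` pairs are needed (3 for a rigid transform),
    and the two lists must correspond one-to-one.
    """
    if len(picked_source) != len(picked_target):
        raise ValueError(
            f"Picked {len(picked_source)} source points but "
            f"{len(picked_target)} target points; counts must match.")
    if len(picked_source) < min_points:
        raise ValueError(
            f"Need at least {min_points} point pairs, "
            f"got {len(picked_source)}.")
    return True
```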

I believe this is it, can you try it and give me some feedback? Thank you!

miguelriemoliveira commented 2 years ago

Hi @manuelgitgomes ,

great work.

I think it makes sense to save these 4 files.

#469 is in the backlog for now, so let's put the files in the root folder for now.

About the evaluations: we should always look at the RMSE. The others are not really important. About the numbers: what do we get using ATOM? Better or worse?

miguelriemoliveira commented 2 years ago

This function calls a class, which I need to explore more thoroughly. The function currently has too large a marker for the picked points, inconsistent zoom, which can lead to misclicks and bad alignment, and inconsistent colour, which can be distracting. This script also breaks when the user picks fewer than 3 points or a different number of points in each pointcloud.

About the open3d questions, I am sure @tiagomfmadeira can help. He worked on this already. @tiagomfmadeira can you give some tips to @manuelgitgomes ?

manuelgitgomes commented 2 years ago

About the numbers:. What do we get using ATOM? Better or worse?

ATOM, on this dataset and this pair of sensors, has these results:

All                   0.077504                  0.1184                   0.0226                   0.0328                   0.0395                   0.0461

Which is slightly worse than the aligned_best, which is the best one of them all! So, I think we are doing well!

miguelriemoliveira commented 2 years ago

Which is slightly worse than the aligned_best, which is the best one of them all! So, I think we are doing well!

Great. Notice that our argument is usually not that ATOM is more accurate than other methods (in fact, most often it is not). The argument we make is that ATOM is as accurate as other methods, but that it is so while calibrating complete systems instead of only the sensor pair being evaluated.

Moreover, in this case we gave ICP a big help with the enhanced manual first guess.

I think this issue is solved. I would put the results in the table and proceed. If you want to spend a couple more days just to finish these loose ends, that is also ok. Up to you.

manuelgitgomes commented 2 years ago

If you want to spend a couple more days just to finish these loose ends, that is also ok. Up to you.

If you need anything else from me for this paper, I can tie these loose ends later. If not, I will try to do it in the next couple of days!

miguelriemoliveira commented 2 years ago

Well, now that we are at it, I would say we can use ICP to also get depth-to-lidar results.

For the depth we save range images, but these can be converted to point clouds:

https://github.com/lardemua/atom/blob/db8d143cfbba3fd42337918c7cb3cb3cef8e1bff/atom_evaluation/scripts/depth_to_rgb_evaluation#L60-L91

Then it's just a matter of running a very similar script to ICP calibration.
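The conversion in the linked code is a standard pinhole back-projection; a numpy sketch under assumed intrinsics fx, fy, cx, cy (the stride parameter mirrors the idea of subsampling the pixel grid):

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy, stride=1):
    """Back-project a depth (range) image in meters into an Nx3 cloud.

    `stride` keeps only every stride-th pixel; pixels with invalid
    depth (<= 0 or NaN) are dropped.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(0, w, stride), np.arange(0, h, stride))
    z = depth[vs, us]
    valid = np.isfinite(z) & (z > 0)
    us, vs, z = us[valid], vs[valid], z[valid]
    x = (us - cx) * z / fx
    y = (vs - cy) * z / fy
    return np.stack([x, y, z], axis=-1)
```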

That would be nice because it would also give us other approaches in the depth-to-lidar table.

What do you say? @danifpdra ? and you?

manuelgitgomes commented 2 years ago

Seems like a cool idea!

manuelgitgomes commented 2 years ago

Hello! I have added functionalities to the script:

I also have a doubt: the average transform of the aligned ICP has a better result than the transform of the aligned ICP with the lowest RMSE. Does this make sense? Is this a bug?

miguelriemoliveira commented 2 years ago

I also have a doubt: the average transform of the aligned ICP has a better result than the transform of the aligned ICP with the lowest RMSE. Does this make sense? Is this a bug?

It makes sense. The best is the best only for a given collection, and the evaluation is done over all collections.
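In code terms, "best" picks the single collection with the lowest RMSE, while "average" pools all per-collection transforms; a sketch of the pooled version using scipy's rotation mean (illustrative, not necessarily what the script does internally):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def average_transform(transforms):
    """Average a list of 4x4 rigid transforms.

    Translation is the arithmetic mean; rotation is the chordal mean
    of the rotation parts (scipy Rotation.mean). Pooling all
    collections this way can generalize better than the single
    best-RMSE collection, as discussed above.
    """
    translations = np.array([T[:3, 3] for T in transforms])
    rotations = Rotation.from_matrix(np.array([T[:3, :3] for T in transforms]))
    T_avg = np.eye(4)
    T_avg[:3, :3] = rotations.mean().as_matrix()
    T_avg[:3, 3] = translations.mean(axis=0)
    return T_avg
```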

danifpdra commented 2 years ago

Hi @manuelgitgomes ,

Are the results above final? Can I put them in the paper? I am a bit lost, sorry.

manuelgitgomes commented 2 years ago

Hello @danifpdra! The results are first being written in #439, but when they are all taken I will put them in the spreadsheet!

danifpdra commented 2 years ago

Okay, thanks :-)

manuelgitgomes commented 2 years ago

Hello! The ICP implementation pipeline is done. However, there are some logical flaws I might need your help solving. When using the initial estimate, everything is aligned correctly, which is great!

image

When picking points for the manual alignment in the depth sensor, the problem arises. Because only the limits are displayed, the user gets decontextualized, not knowing which corner is which, making it very hard to pick the same points between pointclouds.

image

I have also tried using all the pixels in the image, but the open3d window does not display them, even though it computes them. I thought it might be due to the large number of points, so I only kept 1 point in every 100, but the result was the same. You can test this in the next line of code, where the variable full_image makes the ICP use the full image and pixel_interval is the interval between pixels that are converted to points in the pointcloud.

https://github.com/lardemua/atom/blob/9c38093feda85bd6d6566c4c8de2624a679330d5/atom_evaluation/scripts/other_calibrations/icp_calibration.py#L156

I believe we have two options:

As we are short on time, I think the best idea is to use the first option for the results for this paper, and later try to implement option 2. What do you think?

miguelriemoliveira commented 2 years ago

Hi @manuelgitgomes ,

that's strange. Let me look into it, although I have little experience with open3d...

miguelriemoliveira commented 2 years ago

Hi @manuelgitgomes ,

I am lost. Should we not have an ICP calibration script from lidar to lidar, and another from depth to lidar?

This script handles both modalities? How?

manuelgitgomes commented 2 years ago

Hello! Yes, the script detects the modality of the sensor and, depending on the modality, it loads the pointcloud. After the pointcloud is loaded, the process is the same.
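That dispatch can be sketched as a small lookup on the sensor's modality; the field names and loader signatures here are hypothetical, not necessarily the ATOM dataset schema:

```python
def load_pointcloud_for_sensor(dataset, sensor_name, collection_key, loaders):
    """Load points for a sensor regardless of modality.

    `loaders` maps a modality string (e.g. 'lidar3d', 'depth') to a
    function (dataset, sensor_name, collection_key) -> Nx3 points.
    LiDAR data is read directly; depth data is back-projected from
    the range image. After this, the ICP pipeline is identical.
    """
    modality = dataset['sensors'][sensor_name]['modality']
    try:
        loader = loaders[modality]
    except KeyError:
        raise ValueError(
            f"Unsupported modality '{modality}' for sensor {sensor_name}")
    return loader(dataset, sensor_name, collection_key)
```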

miguelriemoliveira commented 2 years ago

Also, can you post how you run the code?

miguelriemoliveira commented 2 years ago

Hello! Yes, the script detects the modality of the sensor and, depending on the modality, it loads the pointcloud. After the pointcloud is loaded, the process is the same.

Where?

manuelgitgomes commented 2 years ago

Also, can you post how you run the code?

rosrun atom_evaluation icp_calibration.py -json $ATOM_DATASETS/larcc/larcc_sim/train_dataset/dataset_corrected.json -ss depth_camera_1 -st lidar_3 -nig 0.1 0.1 -si

manuelgitgomes commented 2 years ago

Where?

https://github.com/lardemua/atom/blob/9c38093feda85bd6d6566c4c8de2624a679330d5/atom_evaluation/scripts/other_calibrations/icp_calibration.py#L147-L162

manuelgitgomes commented 2 years ago

If you need some help we can meet!

miguelriemoliveira commented 2 years ago

OK, let's meet. I will turn on Zoom.

manuelgitgomes commented 2 years ago

Hello! I corrected the bug and added some suggestions:

I believe this is it. Tomorrow I will take some results!

danifpdra commented 2 years ago

Hi @manuelgitgomes and @miguelriemoliveira

Is the ICP lidar-lidar evaluation that is on google sheets complete? If yes, which of them should go in the paper? All of them? Only the best (ICP aligned average)?

Thanks @manuelgitgomes for these results

manuelgitgomes commented 2 years ago

Hello!

Is the ICP lidar-lidar evaluation that is on google sheets complete?

It is!

If yes, which of them should go in the paper?

I believe the initial idea was to place all of them. If you do not have space, I would still suggest placing the initial average and the aligned average, to compare how ICP works with the same initial estimate as ATOM (usually worse) and how it works with a better initial estimate. But let's see what @miguelriemoliveira says.

danifpdra commented 2 years ago

Ok, I can put them all, no problem. Can you just briefly explain the difference between all of them, so I can explain the tables?

manuelgitgomes commented 2 years ago

The difference between initial and aligned:

The difference between best and average:

danifpdra commented 2 years ago

So why is the average better than the best?

manuelgitgomes commented 2 years ago

So why is the average better than the best?

I also had that doubt, but the professor answered me:

It makes sense. The best is the best only for a given collection, and the evaluation is done over all collections.

danifpdra commented 2 years ago

image

I think it looks good, what do you think?

miguelriemoliveira commented 2 years ago

Looks great. Congratulations.

Agree with all you said.

danifpdra commented 2 years ago

Hi @manuelgitgomes ...

I'm sure I'm doing something wrong.

First: are the results for the manual alignment supposed to be this bad? For lidar-lidar pairs they were not...

average aligned lidar 1 depth 1

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
            #                       RMS                   Avg Error                 X Error                  Y Error           X Standard Deviation     Y Standard Deviation   
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
            0                    46.650681                 40.3259                  21.1715                  31.4176                  9.9829                   25.3246         
            1                    62.072666                 52.1176                  31.5023                  39.6737                  17.8420                  31.1172         
            2                    37.831516                 30.6461                  20.0955                  22.0418                  15.6114                  17.2580         
            3                    33.247055                 26.1691                  15.5622                  19.5826                  14.0037                  16.8405         
            4                    37.478825                 29.6903                  17.7347                  23.1683                  14.6402                  18.4130         
            5                    42.734108                 35.8412                  30.1838                  14.5819                  24.7313                  9.5326          
            6                    29.625120                 27.5058                  21.7068                  12.9464                  10.5136                  11.3277         
            7                    83.569252                 70.3403                  54.4337                  41.1845                  41.8374                  23.9638         
            8                    66.548004                 54.5569                  34.8763                  41.5994                  25.5576                  28.7850         
            9                    39.427143                 33.6806                  18.6765                  27.7925                  10.5179                  17.9621         
           10                    32.999649                 28.8262                  19.4487                  16.7632                  14.3925                  14.9191         
           11                    27.770437                 22.9713                  16.4569                  14.0008                  13.5200                  11.0252         
           12                    72.216515                 61.2602                  56.2002                  23.4019                  35.8418                  14.9824         
           13                    74.634613                 64.5749                  59.2445                  24.9836                  35.0420                  14.4321         
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
           All                   38.244579                 57.2703                  30.5442                  26.7261                  27.6817                  22.6227         
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------

average aligned lidar 1 depth 1

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
            #                       RMS                   Avg Error                 X Error                  Y Error           X Standard Deviation     Y Standard Deviation   
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
            0                    66.420242                 55.4342                  44.5758                  24.6165                  40.4263                  13.5789         
            1                    70.164818                 57.2973                  48.9345                  23.9170                  41.5592                  15.1435         
            2                    67.832092                 56.1520                  46.2361                  28.9085                  37.7567                  14.2181         
            3                    62.315342                 50.7005                  43.2921                  23.7180                  36.3365                  11.2300         
            4                    66.012985                 55.2926                  45.7017                  28.3100                  36.2793                  12.3054         
            5                   100.386276                 85.2989                  75.9877                  32.6839                  54.3552                  16.7495         
            6                    92.612206                 66.9629                  64.2030                  18.1151                  61.6799                  17.9560         
            7                   108.082741                 95.5917                  71.9311                  48.8220                  47.7870                  42.9023         
            8                    80.169487                 67.6659                  57.2510                  30.6470                  41.6205                  21.8623         
            9                    84.183937                 75.1669                  72.8820                  15.9113                  37.9548                  9.0226          
           10                    78.720192                 61.1578                  59.7963                  11.0126                  49.2200                  8.7967          
           11                    80.491455                 66.1291                  62.0063                  19.3868                  46.6577                  9.0169          
           12                   117.652786                109.3733                  83.5256                  56.0331                  45.7023                  40.4628         
           13                   118.420540                109.5640                  81.9077                  59.0759                  44.8452                  42.5851         
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
           All                   61.160652                 91.2366                  60.1551                  31.0815                  46.2892                  27.4566         
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------

I used the command

rosrun atom_evaluation icp_calibration.py -j $ATOM_DATASETS/larcc_sim/train_dataset/dataset_corrected.json -ss depth_camera_1 -st lidar_1 -nig 0.1 0.1 -seed 2 -ma

But I wasn't quite sure which points I was supposed to choose, and I chose corners of the chessboard. Maybe that's wrong?

Additionally, some pairs give me this (example of the non-aligned average, depth 1 - lidar 2, for simulation):

Starting evalutation...

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
            #                       RMS                   Avg Error                 X Error                  Y Error           X Standard Deviation     Y Standard Deviation   
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
No LiDAR point mapped into the image for collection 0
No LiDAR point mapped into the image for collection 1
No LiDAR point mapped into the image for collection 2
No LiDAR point mapped into the image for collection 3
No LiDAR point mapped into the image for collection 4
No LiDAR point mapped into the image for collection 5
No LiDAR point mapped into the image for collection 6
No LiDAR point mapped into the image for collection 7
No LiDAR point mapped into the image for collection 8
No LiDAR point mapped into the image for collection 9
No LiDAR point mapped into the image for collection 10
No LiDAR point mapped into the image for collection 11
No LiDAR point mapped into the image for collection 12
No LiDAR point mapped into the image for collection 13
/home/daniela/catkin_ws/src/calibration/atom/atom_evaluation/scripts/lidar_to_depth_evaluation:275: RuntimeWarning: invalid value encountered in true_divide
  avg_error = np.sum(np.abs(delta_total)) / total_pts
/home/daniela/catkin_ws/src/calibration/atom/atom_evaluation/scripts/lidar_to_depth_evaluation:276: RuntimeWarning: Mean of empty slice.
  rms = np.sqrt((delta_total ** 2).mean())
/usr/lib/python3/dist-packages/numpy/core/_methods.py:161: RuntimeWarning: invalid value encountered in true_divide
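Those RuntimeWarnings are numpy dividing by zero and averaging an empty array when no LiDAR point projects into the image; a hypothetical guard mirroring the avg_error/rms computation in the traceback would fail loudly instead:

```python
import numpy as np

def safe_errors(delta_total):
    """Compute average error and RMS, refusing empty input.

    `delta_total` holds per-point deltas; when no LiDAR point maps
    into the image it is empty, and the plain divisions produce NaN
    plus the RuntimeWarnings shown above.
    """
    delta_total = np.asarray(delta_total, dtype=float)
    if delta_total.size == 0:
        raise ValueError("No LiDAR points mapped into the image; "
                         "cannot compute error statistics.")
    avg_error = np.abs(delta_total).mean()
    rms = np.sqrt((delta_total ** 2).mean())
    return avg_error, rms
```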

manuelgitgomes commented 2 years ago

But I wasn't quite sure which points I was supposed to choose, and I chose corners of the chessboard. Maybe that's wrong?

Hello! No, that is correct. But you need to pick them in the same order in both pointclouds.

Aditionally, some pairs give me this: (example of non aligned average depth 1 - lidar 2 for simulation)

Which command did you run here?

danifpdra commented 2 years ago

But I wasn't quite sure which points I was supposed to choose, and I chose corners of the chessboard. Maybe that's wrong?

Hello! No, that is correct. But you need to pick them in the same order in both pointclouds.

I did that, but sometimes I'm not sure, because half the chessboard is missing...

Aditionally, some pairs give me this: (example of non aligned average depth 1 - lidar 2 for simulation)

Which command did you run here?

This

rosrun atom_evaluation icp_calibration.py -j $ATOM_DATASETS/larcc_sim/train_dataset/dataset_corrected.json -ss depth_camera_1 -st lidar_2 -nig 0.1 0.1 -seed 2

and then this

rosrun atom_evaluation lidar_to_depth_evaluation -train_json $ATOM_DATASETS/larcc_sim/train_dataset/ICPCalibration_average_depth_camera_1_lidar_2.json -test_json $ATOM_DATASETS/larcc_real/test_dataset/dataset.json -ld lidar_2 -cs depth_camera_1

danifpdra commented 2 years ago

Btw, the results of the lidar-depth evaluation are in pixels, so those results are really bad...

manuelgitgomes commented 2 years ago

I did that, but sometimes I'm not sure, because half the chessboard is missing...

Yes, it's tricky to do. When it's hard, just press q without selecting any points. If you do that, the current collection is not counted.

rosrun atom_evaluation lidar_to_depth_evaluation -train_json $ATOM_DATASETS/larcc_sim/train_dataset/ICPCalibration_average_depth_camera_1_lidar_2.json -test_json $ATOM_DATASETS/larcc_real/test_dataset/dataset.json -ld lidar_2 -cs depth_camera_1

Interesting, do these results apply for the initial estimate as well? Also, try the dataset_corrected like this:

rosrun atom_evaluation lidar_to_depth_evaluation -train_json $ATOM_DATASETS/larcc_sim/train_dataset/ICPCalibration_average_depth_camera_1_lidar_2.json -test_json $ATOM_DATASETS/larcc_real/test_dataset/dataset_corrected.json -ld lidar_2 -cs depth_camera_1

danifpdra commented 2 years ago

I did that, but sometimes I'm not sure, because half the chessboard is missing...

Yes, it's tricky to do. When it's hard, just press q without selecting any points. If you do that, the current collection is not counted.

Yes, I did that. But still, these results were not supposed to happen, right?

rosrun atom_evaluation lidar_to_depth_evaluation -train_json $ATOM_DATASETS/larcc_sim/train_dataset/ICPCalibration_average_depth_camera_1_lidar_2.json -test_json $ATOM_DATASETS/larcc_real/test_dataset/dataset.json -ld lidar_2 -cs depth_camera_1

Interesting, do these results apply for the initial estimate as well? Also, try the dataset_corrected like this:

rosrun atom_evaluation lidar_to_depth_evaluation -train_json $ATOM_DATASETS/larcc_sim/train_dataset/ICPCalibration_average_depth_camera_1_lidar_2.json -test_json $ATOM_DATASETS/larcc_real/test_dataset/dataset_corrected.json -ld lidar_2 -cs depth_camera_1

Yes, I used the corrected one; I copied from the wrong place. It happens for aligned and non-aligned, best and average.

manuelgitgomes commented 2 years ago

Weird. Let me try it. I will get back to you in a moment.