lardemua / atom

Calibration tools for multi-sensor, multi-modal robotic systems
GNU General Public License v3.0

Generate camera-to-camera error metrics script #221

Closed aaguiar96 closed 3 years ago

aaguiar96 commented 3 years ago

Hi @miguelriemoliveira and @eupedrosa

This also has to be done, right? Should I reuse some code? Which script?

eupedrosa commented 3 years ago

Why? Because of the homography?

The homography uses the detected corners from both images while "my" approach uses the points from the pattern object that have no detection error because they are known.

Can you push the code, please?

Yes, I am gonna push the code. I also corrected the drawing of the projected corners. Using the homography, these corners have no distortion, but the image does.
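
For context, a rough sketch of what the homography-based comparison boils down to (the corner arrays below are made-up examples, not real detections; the actual evaluation script may organize this differently):

```python
import cv2
import numpy as np

# matched corners detected in the two images (made-up values)
pts_src = np.array([[100.0, 120.0], [200.0, 125.0], [105.0, 220.0],
                    [210.0, 230.0], [150.0, 170.0]], dtype=np.float32)
pts_tgt = pts_src + np.array([5.0, -3.0], dtype=np.float32)  # fake offset

# fit a homography between the matched corners
H, _ = cv2.findHomography(pts_src, pts_tgt)

# map the source corners with H and measure the pixel residual in the target
mapped = cv2.perspectiveTransform(pts_src.reshape(-1, 1, 2), H).reshape(-1, 2)
residual = np.linalg.norm(mapped - pts_tgt, axis=1)
print('mean reprojection error: {:.4f} px'.format(residual.mean()))
```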

miguelriemoliveira commented 3 years ago

Hi,

You guys are doing a lot of work... great!

We use distortion, yes. The idea is to undistort, then project, then distort for the other camera.
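
For illustration, a minimal sketch of that chain (intrinsics, distortion coefficients, and points are all made up here; the actual script may differ):

```python
import cv2
import numpy as np

K_src = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
D_src = np.array([0.05, -0.1, 0.0, 0.0, 0.0])  # plumb-bob model (example)
K_tgt = K_src.copy()
D_tgt = np.array([0.02, -0.05, 0.0, 0.0, 0.0])

# 1. undistort the corners detected in the source (distorted) image;
#    with P=K_src the output stays in pixel coordinates
corners_src = np.array([[[330.5, 245.2]], [[410.1, 255.7]]], dtype=np.float32)
undistorted_px = cv2.undistortPoints(corners_src, K_src, D_src, P=K_src)

# 2. ... lift to 3D and transform to the target camera frame using the
#    calibrated transforms; dummy 3D points stand in for that step here
pts3d_tgt = np.array([[0.1, 0.0, 1.0], [0.2, 0.05, 1.1]])

# 3. project into the target image; passing D_tgt applies (i.e. re-distorts
#    with) the target camera's distortion model
pixels_tgt, _ = cv2.projectPoints(pts3d_tgt, np.zeros(3), np.zeros(3),
                                  K_tgt, D_tgt)
```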

Writing the LaTeX is important because we need to explain how we compute the results. And, in my opinion, the current homography looks quite nice in a paper.

You could write it in Andre's Overleaf... then we can compare...

aaguiar96 commented 3 years ago

So, I tested a camera-camera calibration with @eupedrosa's dataset with only 10 collections (I cannot do it with all of them, the calibration takes too long, do you know why @miguelriemoliveira?). These are the results:

Final errors:
Sensor left_camera 2.2872869808447853
Sensor right_camera 1.9953841216533612
Saving the json output file to /home/andre-criis/Documents/saved_datasets/eurico_dataset/atom_calibration.json, please wait, it could take a while ...
Completed.
[INFO] [1598450615.322321]: Reading description file /home/andre-criis/Source/catkin_ws/src/agrob/agrob_description/urdf/agrob.urdf.xacro...
Parsing description file /home/andre-criis/Source/catkin_ws/src/agrob/agrob_description/urdf/agrob.urdf.xacro

Then, I ran the evaluation with the same train and test json (the one obtained) and I got:

With homography:

-------------------------------------------------------------------------------------------------------------
  #           X Error                  Y Error           X Standard Deviation     Y Standard Deviation
-------------------------------------------------------------------------------------------------------------
  0           0.7664                   0.5475                   0.7264                   0.6303
  1           0.7440                   0.7578                   0.7827                   0.7181
  2           0.5640                   0.6720                   0.6264                   0.6594
  3           2.7848                   2.4121                   1.9428                   0.9293
  4           5.4983                   1.7507                   0.8045                   1.9261
  5           1.6737                   1.8730                   1.7484                   1.3980
  6           1.3969                   1.0985                   1.5537                   1.0886
  7           0.7678                   1.1630                   0.5893                   1.0885
  8           1.8116                   0.8544                   1.9607                   0.8361
  9           1.3227                   0.5923                   1.4459                   0.7257
-------------------------------------------------------------------------------------------------------------
 All          1.5903                   1.1299                   2.1994                   1.4707
-------------------------------------------------------------------------------------------------------------

@eupedrosa's approach:

-------------------------------------------------------------------------------------------------------------
  #           X Error                  Y Error           X Standard Deviation     Y Standard Deviation
-------------------------------------------------------------------------------------------------------------
  0           1.1613                   1.1485                   1.2275                   1.3396
  1           1.3264                   1.2749                   1.4798                   1.3745
  2           1.0705                   1.3813                   1.2472                   1.4753
  3           2.9823                   2.4401                   2.3504                   1.3104
  4           5.3215                   1.7866                   1.8328                   2.0481
  5           2.2011                   1.7217                   2.4403                   1.5070
  6           1.9319                   1.1132                   2.1678                   1.2601
  7           1.9462                   2.1626                   2.2917                   2.6899
  8           3.2532                   2.0928                   3.7635                   2.6447
  9           2.7888                   1.4836                   3.4372                   1.8130
-------------------------------------------------------------------------------------------------------------
 All          2.2960                   1.6425                   2.9434                   2.1053
-------------------------------------------------------------------------------------------------------------

Here, the homography has lower error... Also, are the errors printed by the calibration sqrt(x**2 + y**2) over all collections? Just to be able to see which approach is more similar to the calibration output.

eupedrosa commented 3 years ago

I cannot do it with all of them, the calibration takes too long, do you know why @miguelriemoliveira?

Also, are the errors printed by the calibration sqrt(x**2 + y**2) over all collections?

It is the mean error over all collections per sensor, i.e. mean(sqrt(x**2 + y**2)). I think we could also use the Root Mean Squared Error, which is sqrt(mean(x**2 + y**2)).
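
For reference, a minimal sketch of the two aggregations (the residual arrays are made-up numbers, just to show the formulas):

```python
import numpy as np

dx = np.array([0.7, -1.2, 0.4])  # x residuals of all corners (example values)
dy = np.array([0.5, 0.9, -0.3])  # y residuals of all corners (example values)

mean_euclidean = np.mean(np.sqrt(dx**2 + dy**2))  # mean(sqrt(x**2 + y**2))
rmse = np.sqrt(np.mean(dx**2 + dy**2))            # sqrt(mean(x**2 + y**2))

print('mean euclidean error: {:.4f} px'.format(mean_euclidean))
print('RMSE: {:.4f} px'.format(rmse))
```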

aaguiar96 commented 3 years ago

Ok thanks @eupedrosa

Anyway, the evaluation seems to be working. Your version seems closer to the calibration result than the homography.

So,

eupedrosa commented 3 years ago

Both methods are working, but the homography has nice math behind it. We could present both, but they are a little redundant.

But more important than this is which metrics to use:

  1. The mean of pixel difference, per component
  2. The mean of the pixel error, i.e. euclidean distance.
  3. The RMSE
  4. The translation and rotation error - The calculated pattern pose in source_sensor should be equal to the calculated pattern pose in target_sensor, in the base frame.

@miguelriemoliveira, do you have more ideas or suggestions?

miguelriemoliveira commented 3 years ago

Hi guys,

You make me feel bad for being on my last vacation days... thanks a lot.

I did not get what was going wrong ... I guess now both are giving similar results so that's very good.

About it being slow: on my machine it is quite fast. It calibrated the 55 collections in 3-4 minutes. Try @eupedrosa's suggestions. Another suggestion is to run without visualization: take the -v -rv -si -vo flags away.

I agree with @eupedrosa that the homography is nice to show off in a paper (it would have to be updated with the undistort stuff). But we can go for the other one if we have a similar mathematical formulation...

About metrics, I agree with all and don't have any other ideas for now.

I should work a bit on this tonight ... what should I work on?

BTW, did you see the updates I did in the README? We have a new white background image like @eupedrosa wanted.

aaguiar96 commented 3 years ago

I should work a bit on this tonight ... what should I work on?

Hi @miguelriemoliveira

If you want, you can take a look at this. If you think it's working, maybe tomorrow I can start generating the results for the paper.

BTW, did you see the updates I did in the README? We have a new white background image like @eupedrosa wanted.

I saw it. Really nice!

aaguiar96 commented 3 years ago

Hi @miguelriemoliveira and @eupedrosa

I finished all the paper sections except the abstract, results, and conclusions. Soon, I'll start to work on the results. Here (camera to camera), what should I use? Homography, @eupedrosa's approach, or both?

miguelriemoliveira commented 3 years ago

Hi @aaguiar96 ,

Great. @eupedrosa, do you want to split the sections for the review of the paper? Or should we revise one first and then the other?

About the results: for sure we should use just one, since they are very similar. We can use Eurico's approach, no problem for me, but for that he must first write down the methodology. The homography approach only needs to be updated with the undistort/distort part. For me, whatever @eupedrosa says is ok.

We can talk this afternoon if you want to ... I have a meeting at 14h30 which should end around 15h15 ...

aaguiar96 commented 3 years ago

We can talk this afternoon if you want to ... I have a meeting at 14h30 which should end around 15h15 ...

Fine by me, 15h30 then?

miguelriemoliveira commented 3 years ago

OK.

eupedrosa commented 3 years ago

@eupedrosa , you want to split the sections for review of the paper? Or should we revise first one and then the other?

I agree that we should meet to talk about this. 15h30 then?

aaguiar96 commented 3 years ago

Hi @eupedrosa

Will you add the e_R and e_t metrics to the script, or do you want me to? I think it is the only thing that is missing.

eupedrosa commented 3 years ago

You should do it... I will not be able to do it until next Wednesday, sorry.

aaguiar96 commented 3 years ago

Ok, no problem! I'll do it.

Just to make sure: in this script, the RMS is also computed with the homography, right?

aaguiar96 commented 3 years ago

Hi @eupedrosa

Does the angle(delta_R) from equation 17 of your eye-hand paper return an error per component (roll, pitch, yaw)? I implemented it and that is what it returns... Do I have to average the three components?

eupedrosa commented 3 years ago

Does the angle(delta_R) from equation 17 of your eye-hand paper return an error per component (roll, pitch, yaw)?

No, it is the angle of the Rodrigues angle-axis representation of the rotation difference. The error is the root mean square. Here is an example where you have AX - ZB: https://github.com/lardemua/atom/blob/34f2f38593849603c1287f8c23c38d4b99a515c9/atom_calibration/scripts/deprecated/view_errors.py#L25-L67
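
A minimal sketch of that computation, assuming the two pattern poses are available as 4x4 homogeneous matrices (the deprecated script above used the old OptimizationUtils Transform class instead; the matrices here are made-up examples):

```python
import cv2
import numpy as np

T_a = np.eye(4)  # pattern pose estimated from the source sensor (example)
T_b = np.eye(4)  # pattern pose estimated from the target sensor (example)
T_b[:3, 3] = [0.002, -0.001, 0.003]  # a few mm of disagreement (example)

delta = np.dot(np.linalg.inv(T_a), T_b)  # relative pose error
rvec, _ = cv2.Rodrigues(delta[:3, :3])   # angle-axis vector of delta_R
e_r = np.linalg.norm(rvec)               # single rotation angle (radians)
e_t = np.linalg.norm(delta[:3, 3])       # translation error (meters here)

print('rotation error: {:.4f} deg'.format(np.degrees(e_r)))
print('translation error: {:.2f} mm'.format(e_t * 1000.0))
```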

aaguiar96 commented 3 years ago

Hi @eupedrosa

That code uses a library that I think does not exist anymore.

This one:

from OptimizationUtils.tf import Transform

Can you find it?

eupedrosa commented 3 years ago

Hi @aaguiar96. I was able to code a little and I added the calculation of the translation and rotation error. The error output is in millimeters for the translation and degrees for the rotation. I hope it helps.

aaguiar96 commented 3 years ago

Thanks @eupedrosa

I tested with the dataset I recorded. Using the same train and test datasets:

----------------------------------------------------------------
  #    X err     Y err     X std     Y std     T err     R err
----------------------------------------------------------------
  0    0.5573    0.5802    0.7395    0.5779    5.8060    2.4872
  1    0.5925    0.4882    0.7607    0.5777    1.0816    1.8670
  2    0.6453    0.5082    0.8175    0.5568    1.6495    1.4338
  3    3.4849    1.2015    4.3006    1.3567   30.2836    2.7692
  4    3.5522    1.2051    4.3778    1.3938   29.8327    3.2425
  5    2.6586    1.0849    2.4333    1.1850   15.6820    3.7736
  6    3.1718    1.2498    3.6824    1.3888   21.8220    4.7583
  7    2.3569    0.8860    2.8862    1.1384    9.0907    8.9631
  8    1.8404    0.7085    2.4620    0.8675   22.1179    7.7659
  9    2.0966    0.8144    2.5706    1.0223   23.2567    2.0682
 10    1.8982    0.7959    2.3122    1.0091   19.7796    0.5381
 11    1.6010    0.6267    1.8675    0.4729   15.1498    2.8255
 12    1.6175    0.6775    1.8753    0.4575   16.8972    2.9561
 13    1.5912    0.6595    1.6867    0.4190   15.3403    2.1786
 14    1.6464    0.5514    2.0299    0.6769   18.5765    1.5091
 15    1.8311    0.5400    2.1882    0.6262   14.2343    2.3354
 16    1.6146    0.4460    1.9866    0.4741    9.7071    1.3807
 17    1.5817    0.9195    1.2225    0.3862    6.0639    2.1541
 18    1.8025    0.8366    1.5587    0.4814   15.5098    3.2164
 19    1.1211    0.7837    1.4146    0.5971   23.6852    2.4849
 20    1.2099    0.7546    1.5051    0.5750   23.2200    2.3668
 21    1.2167    0.7775    1.5239    0.5515   24.7140    2.4605
 22    1.5420    0.8334    1.5588    0.6014   18.4136    8.5707
 23    1.8581    1.0826    1.4487    1.2411   20.8201    1.1845
 24    2.2912    2.2199    1.3372    0.7895   18.8311    1.6796
 25    2.4778    1.2026    1.2554    1.1908   19.1234    1.7761
 27    1.4838    1.0395    1.0900    1.2036    8.5346    2.4153
 28    2.2628    0.5914    1.1047    0.8124    8.3501    2.5921
 30    2.3495    1.2194    1.8986    1.3411   26.7422    2.1912
 31    1.9258    1.6323    1.9982    1.4843   35.9286    2.4256
 33    2.7898    2.9076    3.2486    2.2134   50.1551    4.2149
 34    3.5798    3.5132    1.6398    1.4064    6.3119    2.0874
 35    1.0777    1.2012    1.3179    0.7296    2.3551    1.2435
 36    1.2567    0.6746    1.3054    0.6506    3.6375    1.1352
 37    0.8869    0.6117    1.2378    0.6962    4.3657    2.0674
 38    1.2281    0.6442    1.5280    0.7579    6.3244    1.4903
 39    1.3176    0.7364    1.5252    0.7884    5.5127    1.5340
 40    1.0204    1.1395    1.1560    0.5278    4.7035    1.4547
 41    0.6358    0.4695    0.7911    0.5764    2.7305    0.8241
 42    1.8799    1.2782    2.0529    1.1145    7.4452    1.5748
 43    1.4190    0.5670    1.4233    0.5752    6.4901    1.2654
 44    1.5136    0.5342    1.5082    0.6377    6.6169    1.3248
 45    0.6987    0.6872    1.0341    0.6785   25.3840    1.8267
 46    0.7328    0.5736    1.0284    0.6230   23.8850    1.6551
 47    1.2822    0.9385    1.2936    1.0424    4.5673    2.8321
 48    0.9931    0.3761    1.0340    0.4641    5.3132    2.3248
 49    1.2048    0.5193    1.5093    0.6728    5.4509    2.5123
 50    1.3874    0.7926    1.6282    0.9753    3.0975    2.4007
 51    2.8408    1.2537    3.6023    1.3114   16.5349    2.0201
 52    2.8760    1.4201    0.4706    1.1424   17.9522    4.1969
 53    2.6107    0.8483    1.4532    1.0287   32.6418    2.2040
 54    1.9625    2.8233    0.4722    0.8411   45.1040    6.4254
 55    2.3658    0.9659    1.1903    1.0059   27.9129    3.0507
 56    3.4179    1.0528    1.5270    1.0364   41.3517    1.7210
 57    0.8213    0.4058    1.0250    0.5007    2.1937    1.2347
 58    0.9261    0.4266    1.1050    0.4024    1.0735    1.3487
----------------------------------------------------------------
 All   1.7308    0.9547    2.3442    1.3252   15.7028    2.5775
----------------------------------------------------------------

Using a different train and test dataset:

----------------------------------------------------------------
  #    X err     Y err     X std     Y std     T err     R err
----------------------------------------------------------------
Removing collection 8 -> pattern was not found in sensor left_camera (must be found in all sensors).
  0    3.0599    1.6389    2.8854    1.3636   11.3520    2.5225
  1    2.6594    1.2748    3.1347    1.5788    7.5405    2.7860
  2    1.2852    0.8692    0.9081    0.5945    6.2478    1.0328
  3    0.7217    0.7012    0.8163    0.7589    5.1536    1.0569
  4    0.7636    0.8152    0.8270    0.8756    1.0668    1.1561
  5    1.0919    1.6400    1.1195    1.0028   10.5572    1.2960
  6    0.9944    1.3278    1.2600    1.2804    8.9805    1.6756
  7    1.4239    0.8289    1.2920    0.9460   19.7965    2.3024
  9    2.1891    1.8362    2.0510    1.3092   10.2105    2.1970
 10    2.0770    1.1694    2.1681    1.2129   14.7617    2.2263
 11    2.1236    1.1421    1.7764    1.0828   16.0720    2.4410
 12    1.0833    1.0876    1.0781    0.8341   20.8761    1.3710
 13    1.0648    0.9773    1.1509    0.7809   22.0828    1.4194
 14    6.3392    2.0956    2.8616    1.3613    5.9792    2.7499
 15    6.2965    2.2514    2.9532    1.5395    6.9111    2.8429
 16    6.4802    2.8173    2.9233    1.4478    7.7617    2.7385
 17    5.9194    2.0647    2.4853    1.1595    6.9303    2.6753
----------------------------------------------------------------
 All   2.8494    1.4645    3.3029    1.4650   10.7224    2.0288
----------------------------------------------------------------

The e_t is really high, right?...

aaguiar96 commented 3 years ago

I leave here the train and test datasets if you want to test:

Train: https://drive.google.com/file/d/1SDViW9Cn3onGxh1CfF0fUXLWENAl9JHk/view?usp=sharing Test: https://drive.google.com/file/d/1jBC3M4Mfp0KFJmmI9aXSQvcHkU0GYQDv/view?usp=sharing

eupedrosa commented 3 years ago

The e_t is in millimeters. Is 1.5 cm bad? Maybe... But for me an angular error of 2 degrees is worse.

Is this a full calibration? With the 50 collections that I have, I get lower errors. I cannot show you the actual results right now; I will not be at home until Wednesday.

From what I see, the overall errors are higher compared with my dataset. You may have a lot of unsynchronized images.

To summarize, it is possible to have better results.

aaguiar96 commented 3 years ago

With @eupedrosa's dataset:

---------------------------------------------------------------------------
  #     RMS      X err     Y err     X std     Y std     T err     R err
---------------------------------------------------------------------------
Removing collection 8 -> pattern was not found in sensor left_camera (must be found in all sensors).
  0      -       1.4038    0.3799    0.4024    0.3758    4.6177    0.1089
  1      -       0.5578    0.6314    0.3804    0.4334    2.8866    0.3397
  2      -       1.3008    0.5005    0.3412    0.3427    4.0612    0.4612
  3      -       0.3781    0.3238    0.3063    0.3544    1.1428    0.1604
  4      -       0.5411    0.3708    0.3696    0.3153    2.0345    0.4958
  5      -       0.5796    2.1175    0.6743    0.8670    1.9104    0.5079
  6      -       0.8074    0.8925    0.6942    0.8677    3.9594    0.5208
  7      -       0.3423    0.2808    0.4321    0.2638    2.0988    0.3916
  9      -       0.3269    0.7726    0.3131    0.4998    0.5193    0.2527
 10      -       0.2924    0.3966    0.2534    0.4638    1.6237    0.1373
 11      -       0.2203    0.2704    0.2669    0.3190    1.6119    0.1329
 12      -       0.2244    0.3008    0.2771    0.3052    0.6678    0.1005
 13      -       0.2304    0.2988    0.2859    0.2465    1.3464    0.1831
 14      -       0.4952    0.3534    0.2690    0.3014    1.2081    0.0496
 15      -       0.5616    0.4572    0.4920    0.4971    1.9562    0.3371
 16      -       1.2512    0.4710    0.3888    0.5226    4.0252    0.2861
 17      -       0.4261    0.3794    0.2627    0.3233    1.0822    0.0628
---------------------------------------------------------------------------
 All   0.7488    0.6047    0.5000    0.7479    0.7055    2.1619    0.2664
---------------------------------------------------------------------------

It is actually much better! I'll check if it also works well for LiDAR-camera. If so, I'll use this one as the train dataset.

miguelriemoliveira commented 3 years ago

Yep, these results look nice. These are with Eurico's dataset for training and another for testing, right?

How about Eurico for training and Eurico for testing?

I suggest using SI units for everything; let's use meters rather than millimeters.

Great work.

aaguiar96 commented 3 years ago

These are with Eurico's dataset for training and another for testing, right?

Yes @miguelriemoliveira

How about Eurico for training and Eurico for testing?

---------------------------------------------------------------------------
  #     RMS      X err     Y err     X std     Y std     T err     R err
---------------------------------------------------------------------------
  0      -       0.2392    0.3404    0.2616    0.3564    1.5434    0.6921
  1      -       0.2623    0.3239    0.2722    0.3501    1.3669    0.3858
  2      -       0.2737    0.2821    0.2577    0.2697    0.7820    0.2009
  3      -       3.9040    3.5761    0.4672    0.8878    8.4321    0.4038
  4      -       5.6482    2.4720    1.2047    0.9604   15.1565    0.8495
  5      -       0.4277    0.9966    0.5158    0.4249    4.1597    1.0714
  6      -       0.2675    0.3498    0.3100    0.2984    0.7565    0.0497
  7      -       0.5288    1.2446    0.6389    0.8211    9.6380    2.3618
  8      -       0.2741    0.3378    0.3400    0.2509    1.0524    0.2711
  9      -       0.6278    0.8517    0.3688    0.3168    2.8816    0.4804
 10      -       0.3683    0.3938    0.3185    0.3733    1.2767    0.2680
 11      -       2.0295    3.3233    0.8126    0.3769    7.3986    0.4634
 12      -       1.2013    1.0595    1.0577    0.7042    4.4798    0.6004
 13      -       0.4230    0.2731    0.3680    0.3407    1.1920    0.2454
 14      -       0.3897    0.4783    0.3969    0.2364    1.9420    0.1555
 15      -       0.2227    0.7032    0.2660    0.2780    1.4714    0.1787
 16      -       0.3817    0.3125    0.3267    0.2919    0.7276    0.0724
 17      -       0.2691    0.2474    0.3335    0.3071    0.2846    0.0849
 18      -       0.5881    0.6933    0.6900    0.6943    1.6005    0.6717
 19      -       0.6773    0.8734    0.3041    0.3710    2.4079    0.2381
 20      -       0.2287    0.4170    0.2684    0.6403    1.9068    0.3138
 21      -       0.2008    0.1839    0.2401    0.2065    0.5021    0.0676
 23      -       0.1993    0.2263    0.2491    0.2378    1.4553    0.1150
 24      -       4.8316    0.6723    0.9215    0.6927   16.2562    2.7349
 25      -       0.5840    0.7400    0.5456    0.6713    4.9895    1.6760
 26      -       1.0870    1.3708    0.7292    0.5529    5.0671    1.0176
 27      -       0.4722    0.4310    0.6275    0.3531    0.6440    0.5892
 28      -       0.3985    0.2591    0.5805    0.3407    1.8241    0.6722
 29      -       2.6007    2.1415    3.0583    2.4830    9.0798    1.8350
 30      -       0.3940    0.4616    0.4647    0.5461    2.1259    0.6142
 31      -       0.5186    0.2478    0.2803    0.2571    1.6351    0.2824
 32      -       0.2508    0.3684    0.2820    0.3043    0.9830    0.3064
 33      -       0.1726    0.2962    0.2177    0.2521    0.7643    0.2044
 34      -       0.3683    1.0594    0.3175    0.3702    1.7316    0.2147
 35      -       0.2616    0.2644    0.2531    0.2977    1.6408    0.1989
 36      -       0.2460    0.2385    0.2788    0.2847    0.3909    0.1952
 37      -       0.3564    0.2876    0.2779    0.2913    0.9959    0.4722
 38      -       2.0643    1.1561    1.2332    1.1859   11.6498    0.1271
 39      -       0.3382    0.3806    0.3411    0.3081    0.5579    0.1132
 40      -       2.9110    1.0346    0.3562    0.3515    4.9794    0.2643
 41      -       0.2504    0.2994    0.2733    0.3571    1.0425    0.1318
 42      -       0.5550    0.7767    0.6738    0.4221    2.1897    0.6429
 43      -       0.3056    0.4245    0.3676    0.4422    1.2480    0.6361
 44      -       0.4877    0.5274    0.5878    0.6096    3.8250    0.8812
 45      -       0.6324    0.9106    0.6985    1.0501    6.9114    1.0805
 46      -       0.7015    0.4046    0.6731    0.4583    7.4209    0.7072
 47      -       0.9180    0.4177    0.3401    0.3057    2.5042    0.4187
 48      -       0.4544    0.7749    0.3233    0.3209    5.0273    0.1735
 49      -       0.2824    0.2696    0.3330    0.3281    3.2851    0.3434
 50      -       1.2803    1.1667    1.3962    1.1629    4.4123    0.9406
 51      -       0.3477    0.2528    0.4405    0.3121    5.0391    0.3396
 52      -       0.4188    0.4437    0.4803    0.3621    3.7104    0.1935
 53      -       0.2533    0.2168    0.2822    0.2433    1.2941    0.2716
 54      -       0.2520    0.2303    0.3021    0.2770    0.3158    0.2919
 55      -       0.2621    0.2315    0.3216    0.2672    0.4441    0.2658
---------------------------------------------------------------------------
 All   1.2465    0.7836    0.6866    1.4095    1.0573    3.3891    0.5287
---------------------------------------------------------------------------

I suggest using SI units for everything; let's use meters rather than millimeters.

And radians rather than degrees?

aaguiar96 commented 3 years ago

Hi @miguelriemoliveira and @eupedrosa

Can anyone do a calibration with @eupedrosa's dataset using only the two cameras? Just to compare with the result I am getting. The calibration is taking too long, and visually does not seem perfect.

EDIT: I was running the calibration with roslaunch. rosrun fixed the issue! :)

miguelriemoliveira commented 3 years ago

Hi

I can try tomorrow morning

miguelriemoliveira commented 3 years ago

Hi @aaguiar96 ,

I tried this:

rosrun atom_calibration calibrate -json /home/mike/datasets/agrob/agrob_01_07_2020/data_collected.json -csf 'lambda x: int(x) < 66' -ssf 'lambda name: name in ["left_camera","right_camera"]' -oi -sr 0.5 -v -ajf

with the final report:

Final errors:
Errors per sensor:
  left_camera 0.588897107388
  right_camera 0.575125976565
Sensor left_camera 0.588897107388
Sensor right_camera 0.575125976565
Saving the json output file to /home/mike/datasets/agrob/agrob_01_07_2020/atom_calibration.json, please wait, it could take a while ...

and it finished in under 1 minute... if you want, we can do a Zoom call to discuss it and debug your side...

aaguiar96 commented 3 years ago

Hi @miguelriemoliveira

Thanks. I figured out that it was because I was executing the calibration with roslaunch. With rosrun it is much faster, and I got similar results.

miguelriemoliveira commented 3 years ago

Hum... this is strange... can you paste the exact command you used for roslaunch?

Also, post the contents of your roslaunch (not sure if you have the latest generated version)

miguelriemoliveira commented 3 years ago

But the good news is it's working!

aaguiar96 commented 3 years ago

Hi @miguelriemoliveira and @eupedrosa

From your experience, does the stereoCalibrate routine from OpenCV accept partial detections? I am stuck with this error:

Traceback (most recent call last):
  File "/home/andreaguiar/Source/catkin_ws/src/atom/atom_evaluation/scripts/others/cv_stereo_calib.py", line 154, in <module>
    cvStereoCalibrate(objp, paths_s, paths_t)
  File "/home/andreaguiar/Source/catkin_ws/src/atom/atom_evaluation/scripts/others/cv_stereo_calib.py", line 96, in cvStereoCalibrate
    criteria=stereocalib_criteria)
cv2.error: OpenCV(4.2.0) /io/opencv/modules/calib3d/src/calibration.cpp:3351: error: (-2:Unspecified error) in function 'void cv::collectCalibrationData(cv::InputArrayOfArrays, cv::InputArrayOfArrays, cv::InputArrayOfArrays, int, cv::Mat&, cv::Mat&, cv::Mat*, cv::Mat&)'
> Number of object and image points must be equal (expected: 'numberOfObjectPoints == numberOfImagePoints'), where
>     'numberOfObjectPoints' is 88
> must be equal to
>     'numberOfImagePoints' is 52

miguelriemoliveira commented 3 years ago

Hi @aaguiar96 ,

No, OpenCV uses the full chessboard only, so partial detections won't work here... I can try to help tonight...

aaguiar96 commented 3 years ago

No, OpenCV uses the full chessboard only, so partial detections won't work here... I can try to help tonight...

This is bad... Most of our collections are partial detections...

aaguiar96 commented 3 years ago

No, OpenCV uses the full chessboard only, so partial detections won't work here... I can try to help tonight...

Ok, I will commit a script for the opencv stereocalibration soon so you can look at it

miguelriemoliveira commented 3 years ago

Ok, I will. @eupedrosa, after detecting the charuco board, is it not possible to derive the coordinates of all points (some of them will be outside the image, but that's ok)?

Does this exist?

aaguiar96 commented 3 years ago

How to run:

rosrun atom_evaluation cv_stereo_calib.py -json "/home/andreaguiar/Documents/datasets/eurico_dataset/data_collected.json" -lc "left_camera" -rc "right_camera"

miguelriemoliveira commented 3 years ago

Hi @aaguiar96 ,

I was thinking about this question I asked @eupedrosa

@eupedrosa, after detecting the charuco board, is it not possible to derive the coordinates of all points (some of them will be outside the image, but that's ok)?

and I think it is possible. There is an alternative path which I also detail below.

Let's call this Option A:

After detection of the charuco board you have:

  1. The pose of the charuco w.r.t the camera
  2. The camera's intrinsics
  3. A description of the charuco

So now you can generate a list of 3D points in the local charuco reference frame which you then transform to the camera reference frame using 1. These 3D points are all the corners of the charuco, not just the visible ones.

Then, you project the 3D points in the camera's reference frame onto the image and get the pixel coordinates of all corners, which is what you need to insert into opencv.

One catch here: some corners will be outside of the image (e.g. the column coordinate is < 0 or > width). But I think you can input these coordinates into the opencv function anyway.

Catch number two: before spending quite some time implementing this, you should first try with made-up coordinates just to see if OpenCV's function does accept out-of-image coordinates (I am almost sure it does).

Catch number three: if you use our projection function, it already tells you which pixels are outside the image. Use as appropriate; a rough sketch follows the link below.

https://github.com/miguelriemoliveira/OptimizationUtils/blob/250d960af1f2ba2338df3d300ae9abc8fdd1b9d9/OptimizationUtils/utilities.py#L485-L490
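
A rough sketch of option A (the charuco geometry, intrinsics, and pose below are made up; in practice they come from the dataset json and the detection):

```python
import cv2
import numpy as np

cols, rows, square = 11, 8, 0.06  # charuco geometry (made-up values)
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
D = np.zeros(5)
width, height = 640, 480

# 1. + 3. all inner corners of the charuco in its local reference frame
xx, yy = np.meshgrid(np.arange(1, cols), np.arange(1, rows))
corners_3d = np.stack([xx.ravel() * square, yy.ravel() * square,
                       np.zeros(xx.size)], axis=1)

# 2. pose of the charuco w.r.t. the camera (dummy pose here)
rvec = np.array([0.1, -0.2, 0.0])
tvec = np.array([0.3, 0.1, 1.0])

# project ALL corners, visible or not
pixels, _ = cv2.projectPoints(corners_3d, rvec, tvec, K, D)
pixels = pixels.reshape(-1, 2)

# flag the corners that fall outside the image (catches one and three)
outside = ((pixels[:, 0] < 0) | (pixels[:, 0] >= width) |
           (pixels[:, 1] < 0) | (pixels[:, 1] >= height))
print('{} of {} corners project outside the image'.format(outside.sum(),
                                                          len(pixels)))
```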

Concerning Option B:

the idea is to assume this is a limitation of OpenCV's stereo processing and use only complete collections. While I was fixing #231 I found out that for @eupedrosa's dataset we have 55 collections total, and 40 if we remove the incomplete collections. You can just calibrate with OpenCV using those 40 collections, no?

aaguiar96 commented 3 years ago

Hi @miguelriemoliveira

Thanks for the input. I think I will go for option B. This is a limitation of OpenCV, so I don't think we should fix it. Thanks again.

aaguiar96 commented 3 years ago

Hi @miguelriemoliveira and @eupedrosa

The script for OpenCV calibration is working. It returns intrinsics and cam2cam extrinsic calibration.

Now I have to generate a json file with these results, right? My question is: the extrinsic result is directly from camera to camera. In the json, we do not have such a transformation, right? Do I have to compute the chain of tfs from base to each camera using the camera-to-camera tf?

miguelriemoliveira commented 3 years ago

Hi @aaguiar96 ,

The script for OpenCV calibration is working. It returns intrinsics and cam2cam extrinsic calibration.

that was fast!

Now I have to generate a json file with these results, right?

Better said, you need to augment or update a json file which is given to the script (just like in the calibrate)

My question is: the extrinsic result is directly from camera to camera. In the json, we do not have such a transformation, right?

Right, we don't

Do I have to compute the chain of tfs from base to each camera using the camera-to-camera tf?

I would prefer that, yes. @afonsocastro tackled this in his stereo script but I don't remember if he did it inside the stereo calibrate script or later in the comparison script.

In any case it is better if all scripts output the same thing, so if you can convert the camera-to-camera transform into the same transforms we have for the calibrate, I think it's better.
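
A tiny sketch of that composition, assuming 4x4 homogeneous matrices (hypothetical variable names; in ATOM the chain comes from the transforms stored in the json):

```python
import numpy as np

T_base_left = np.eye(4)   # base -> left_camera, from the existing tf chain
T_left_right = np.eye(4)  # left_camera -> right_camera, from stereoCalibrate

# base -> right_camera, so the output json keeps the same transforms
# as the ones written by the calibrate script
T_base_right = np.dot(T_base_left, T_left_right)
```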

aaguiar96 commented 3 years ago

Ok @miguelriemoliveira, I will try to do that.

Just to make sure I'm doing everything ok, here's my approach (sketched below):

  1. Read pattern complete detections from train json file.
  2. Use cv2.calibrateCamera for each camera to compute OpenCV's intrinsics estimations.
  3. Use cv2.stereoCalibrate with the intrinsics result to get the extrinsics.
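
A self-contained sketch of steps 2 and 3 with synthetic data (the grid, poses, and intrinsics are made up; the real script reads the complete detections from the json):

```python
import cv2
import numpy as np

size = (640, 480)
K_true = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
R_lr, _ = cv2.Rodrigues(np.array([0.0, 0.05, 0.0]))  # left -> right rotation
t_lr = np.array([[0.12], [0.0], [0.0]])              # 12 cm baseline

# 9x6 planar grid with 5 cm squares, mimicking a complete pattern detection
objp = np.zeros((9 * 6, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2) * 0.05

objpoints, imgpoints_l, imgpoints_r = [], [], []
for i in range(8):  # 8 synthetic "collections" with varied poses
    rvec = np.array([0.1 * i, -0.05 * i, 0.02 * i])
    tvec = np.array([0.1 * i - 0.3, 0.05 * i - 0.2, 1.5])
    pl, _ = cv2.projectPoints(objp, rvec, tvec, K_true, None)
    R_w, _ = cv2.Rodrigues(rvec)
    rvec_r, _ = cv2.Rodrigues(np.dot(R_lr, R_w))       # pattern pose in the
    tvec_r = (np.dot(R_lr, tvec.reshape(3, 1)) + t_lr).ravel()  # right camera
    pr, _ = cv2.projectPoints(objp, rvec_r, tvec_r, K_true, None)
    objpoints.append(objp)
    imgpoints_l.append(pl.astype(np.float32))
    imgpoints_r.append(pr.astype(np.float32))

# step 2: per-camera intrinsics
_, K_l, D_l, _, _ = cv2.calibrateCamera(objpoints, imgpoints_l, size, None, None)
_, K_r, D_r, _, _ = cv2.calibrateCamera(objpoints, imgpoints_r, size, None, None)

# step 3: stereo extrinsics, keeping the step-2 intrinsics fixed
ret, K_l, D_l, K_r, D_r, R, T, E, F = cv2.stereoCalibrate(
    objpoints, imgpoints_l, imgpoints_r, K_l, D_l, K_r, D_r, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
print('stereo RMS: {:.4f} px, baseline: {:.3f} m'.format(ret, np.linalg.norm(T)))
```
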
miguelriemoliveira commented 3 years ago

Hi André,

I think 2 is not needed. You get the intrinsics from the input json file.

Miguel

aaguiar96 commented 3 years ago

Ok @miguelriemoliveira

eupedrosa commented 3 years ago

I actually think that @aaguiar96 should do the 3 steps. That way we can do a direct comparison with our solution that also computes intrinsics.

aaguiar96 commented 3 years ago

I actually think that @aaguiar96 should do the 3 steps. That way we can do a direct comparison with our solution that also computes intrinsics.

That was also my idea.

miguelriemoliveira commented 3 years ago

No need. If you want to optimize the intrinsics you can use the proper flags in the OpenCV stereoCalibrate function. Check:

https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#stereocalibrate

If you want an OpenCV stereo calibration which also estimates intrinsics, then also run one which does not, so you have two OpenCV stereo approaches.
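
For reference, the two variants map to these flags (just a sketch; the surrounding stereoCalibrate call is as in the docs linked above):

```python
import cv2

# variant 1: keep the given intrinsics fixed, estimate only R and T
flags_fixed = cv2.CALIB_FIX_INTRINSIC

# variant 2: use the given intrinsics as an initial guess and refine
# them together with the extrinsics
flags_refine = cv2.CALIB_USE_INTRINSIC_GUESS
```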

eupedrosa commented 3 years ago

No need. If you want to optimize the instrinsics you can use the proper flags in the opencv stereocalibrate function.

At a higher level we are doing the three steps, so I guess we are on the same page. The difference is that the implementation allows doing steps 2 and 3 at the same time. Nonetheless, OpenCV is doing the intrinsic estimation.

aaguiar96 commented 3 years ago

Hi @miguelriemoliveira and @eupedrosa

Sadly, the training dataset does not have any complete collection, so I cannot use it for OpenCV stereo calibration... Can I use the test dataset instead (it has some complete collections) and evaluate OpenCV on the test dataset as well? Or, alternatively, can I use another dataset to calibrate OpenCV (such as one from @eupedrosa)?

I know that this is not ideal, but I do not want to have to generate all the results again with another dataset just because of OpenCV...