lardemua / atom

Calibration tools for multi-sensor, multi-modal robotic systems
GNU General Public License v3.0

Generate camera-to-camera error metrics script #221

Closed aaguiar96 closed 3 years ago

aaguiar96 commented 3 years ago

Hi @miguelriemoliveira and @eupedrosa

This also has to be done, right? Should I reuse some code? If so, which script?

miguelriemoliveira commented 3 years ago

Hi @aaguiar96 ,

I think you can use this:

https://github.com/miguelriemoliveira/OptimizationUtils/blob/master/test/sensor_pose_json_v2/results_visualization.py

and

https://github.com/miguelriemoliveira/OptimizationUtils/blob/master/test/sensor_pose_json_v2/stereocalib_v2.py

aaguiar96 commented 3 years ago

Hi @miguelriemoliveira

I started this script this evening. I think we're moving on. The script receives the test and train json files and the source and target sensors (both cameras), and it also has a flag to show images. The program already:

- Reads the corner detections of both cameras for each collection from the test json file
- Computes the source camera pose w.r.t. pattern using solvePnP for each collection

If you want to work a little on this, I've just committed the code. If you make some progress, let me know; otherwise I'll continue this on Monday! :+1:

aaguiar96 commented 3 years ago

To run:

rosrun atom_evaluation camera_to_camera_evalutation.py -train_json /home/andre-criis/Documents/saved_datasets/31-07/data_collected.json -test_json /home/andre-criis/Documents/saved_datasets/31-07/data_collected.json -ss "right_camera" -ts "left_camera" -si

I was testing with the same json for train and test, but they should be different files, of course.

miguelriemoliveira commented 3 years ago

Hi @aaguiar96 ,

I tested it and it is working fine. Some comments:

Reads the corner detections of both cameras for each collection from the test json file

You can read the detections in the target sensor's image and draw them differently, for example as squares, like I do here. Then we will know that the dots (projections of detections from the source sensor) should be inside the squares (ground-truth detections in the target sensor)

[image]
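
A minimal sketch of that drawing convention, assuming OpenCV and hypothetical variable names (image, ground_truth_corners, projected_corners):

import cv2

# Ground-truth detections in the target sensor: drawn as squares
for (x, y) in ground_truth_corners:
    cv2.rectangle(image, (int(x) - 4, int(y) - 4), (int(x) + 4, int(y) + 4), (0, 255, 0), 1)

# Projections of the source-sensor detections: drawn as filled dots,
# which should land inside the squares if the calibration is good
for (x, y) in projected_corners:
    cv2.circle(image, (int(x), int(y)), 2, (255, 0, 0), -1)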

Computes the source camera pose w.r.t. pattern using solvePnP for each collection

What you need to do is read the detections from the source sensor and then project them (using the transformations and intrinsics contained in the training dataset) onto the target sensor's image. Follow the RAS paper

https://www.sciencedirect.com/science/article/abs/pii/S0921889020303985

section 4.1
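
A minimal sketch of that projection, assuming OpenCV and hypothetical names; the 4x4 source-to-target transform would be assembled from the transformations in the train json:

import cv2
import numpy as np

def project_src_to_tgt(obj_points, src_corners, K_src, D_src, K_tgt, D_tgt, T_src_to_tgt):
    # Pose of the pattern w.r.t. the source camera, from the source detections
    _, rvec, tvec = cv2.solvePnP(obj_points, src_corners, K_src, D_src)
    R, _ = cv2.Rodrigues(rvec)
    # Pattern corners expressed in the source camera frame
    pts_src = (R @ obj_points.T + tvec).T
    # Move the points into the target camera frame with the calibrated transform
    pts_h = np.hstack([pts_src, np.ones((len(pts_src), 1))])
    pts_tgt = (T_src_to_tgt @ pts_h.T).T[:, :3]
    # Project into the target image with the target intrinsics and distortion
    proj, _ = cv2.projectPoints(pts_tgt, np.zeros(3), np.zeros(3), K_tgt, D_tgt)
    return proj.reshape(-1, 2)

The per-corner error is then the distance between these projections and the ground-truth detections in the target image.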

Good work. We are advancing. Let me know if you need some help. If I have time I will take a look at the paper.

miguelriemoliveira commented 3 years ago

The code I told you about is in an old version of optimization utils. Take a look here

https://github.com/miguelriemoliveira/OptimizationUtils/blob/b28119bcb659ac5d177f5f9e6158c19e19c405ae/test/sensor_pose_json_v2/results_visualization.py

The code would compute the results for all approaches, but I think it is better if it does it for a single approach at a time (as you are doing here)

aaguiar96 commented 3 years ago

Hi @miguelriemoliveira, thanks for the tips.

This evening I was implementing the projection. I'm getting a result, and I don't know whether it is right and the calibration is bad, or whether the projection has some error.

Here (projections in blue):

[screenshots]

I'll try to debug some more. If you can, take a look please!

aaguiar96 commented 3 years ago

Also, I will record the final train and test datasets and try another calibration to see if the error comes from the calibration itself.

aaguiar96 commented 3 years ago

Ok, I think it was an error in the calibration. I calibrated only the two cameras:

[screenshot]

and now I got this:

[screenshots]

So, it seems to be working! :)

aaguiar96 commented 3 years ago

Also, I recorded a new training dataset.

Number of collections: 29
Bagfile: calibration_2020-07-01-10-08-10_0.bag
File: train_dataset.zip

I tested a full calibration with all the collections, and the cameras do not seem to be well calibrated. Can you try and give me your feedback @miguelriemoliveira?

miguelriemoliveira commented 3 years ago

Hi @aaguiar96 ,

I will take a look at this tonight and try to help.

aaguiar96 commented 3 years ago

Ok @miguelriemoliveira

With the last commit, the script is already working and outputting the metrics! Please check it out. The only thing still not supported is partial detections. This case is a little tricky because we have to associate the corners, and some of them will not have any association. Did you solve this before, or did you only consider full detections?

miguelriemoliveira commented 3 years ago

Hi @aaguiar96 ,

Sorry, fell asleep last night. Will look into this this afternoon.

For the partial detections you must match the id field of each point, to make sure you are comparing only the same indices. Indices detected in only one camera should be discarded, since it is not possible to compute the error in that case.

We do that in the objective function, but in a somewhat cryptic fashion:

https://github.com/lardemua/atom/blob/dca09cbcf2c7d475f9e02a0aa7f83ebaf2f27df1/atom_calibration/src/atom_calibration/calibration/objective_function.py#L109-L120

For the evaluation, since time is not a problem, I would do it more explicitly: for each index id in the source camera, search for the same id in the target camera, and if it exists, compute the error.
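
A minimal sketch of that explicit matching, assuming each detection carries an id field as in the ATOM label format (variable names are hypothetical):

# Index detections by corner id
src_by_id = {d['id']: d for d in src_labels['idxs']}
tgt_by_id = {d['id']: d for d in tgt_labels['idxs']}

# Keep only ids detected by both cameras; the rest cannot produce an error
common_ids = sorted(set(src_by_id) & set(tgt_by_id))
for idx in common_ids:
    src_pt, tgt_pt = src_by_id[idx], tgt_by_id[idx]
    # ... project src_pt to the target image and compare it against tgt_pt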

aaguiar96 commented 3 years ago

Ok thanks @miguelriemoliveira

I did not know that the indices correspond, i.e., I thought they could be in some random order. But nice, that way it's simple. I'll add that feature, and then I think the script is ready. :)

aaguiar96 commented 3 years ago

Hi @miguelriemoliveira

Implemented with the last commit. I think we're ready to generate the results.

eupedrosa commented 3 years ago

Hello @aaguiar96 and @miguelriemoliveira, I guess I am (almost) back from my vacation.

I found a bug here: https://github.com/lardemua/atom/blob/d7e626a90ccac847d7e18b5a70a4dbcb84176ae5/atom_evaluation/scripts/camera_to_camera_evalutation.py#L254

The axis should be axis=0, otherwise the standard deviation is not correctly calculated.
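
For reference, a minimal sketch of the difference, with a hypothetical errors array holding one [x, y] error per corner:

import numpy as np

errors = np.array([[1.0, 2.0],
                   [3.0, 4.0],
                   [5.0, 6.0]])   # one row per corner: [x_error, y_error]

np.std(errors)          # a single scalar over all values -- wrong here
np.std(errors, axis=0)  # per-column std -> array([x_std, y_std])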

I used your dataset for training and my dataset with more than 50 images for testing; here are the results:

-------------------------------------------------------------------------------------------------------------
  #           X Error                  Y Error           X Standard Deviation     Y Standard Deviation
-------------------------------------------------------------------------------------------------------------
 54           5.7511                   1.7663                   6.2182                   1.8697
 42           7.0982                   3.2030                   0.9983                   0.4932
 43           6.6598                   2.8408                   0.8080                   0.4986
 49           2.2894                   5.2392                   1.9269                   0.8069
 52           3.6468                   5.2492                   3.7900                   1.7335
 53           5.1667                   1.6159                   5.6693                   1.6909
 24           5.3479                   1.6536                   4.7051                   1.8185
 25           4.7444                   1.5354                   4.6289                   1.7569
 26           8.0171                   1.9010                   4.8605                   1.8729
 27           5.2404                   1.2090                   4.8904                   1.2887
 20           6.5219                   2.2910                   1.0997                   0.8057
 21           6.5238                   2.0392                   1.2697                   0.3466
 48           2.3025                   5.0672                   1.9444                   0.7269
 23           6.3368                   2.0256                   1.1430                   0.6026
 46           3.3004                   4.8201                   1.3524                   0.7226
 47           1.9667                   4.6219                   1.5722                   0.8283
 44           6.6573                   2.2553                   0.9803                   0.6954
 45           7.7538                   2.9348                   0.6626                   1.1151
 28           4.6999                   1.1877                   4.9847                   1.2966
 29           4.3689                   4.0608                   5.1104                   3.1619
 40           7.7306                   1.1311                   2.1663                   0.9575
 41           5.2821                   1.9849                   1.9730                   0.9085
  1           5.1953                   3.6100                   1.4839                   0.7886
  0           5.6302                   3.5849                   1.1780                   0.6402
  3           4.1723                   9.3534                   4.5715                   1.9422
  2           5.0222                   3.7298                   1.3684                   0.7913
  5           3.7866                   2.8271                   4.4987                   1.9566
  4           6.8114                   7.5987                   3.9623                   2.2950
  7           3.3017                   4.9965                   3.8465                   2.2647
  6           3.3832                   3.4959                   3.9409                   2.0986
  9           3.1919                   4.2145                   3.7906                   1.6877
  8           3.2611                   3.6077                   3.7699                   2.0326
 51           3.2890                   5.8059                   3.3395                   1.5103
 39           5.0751                   1.7213                   2.1830                   0.9789
 38           4.2060                   2.9835                   1.9649                   0.9875
 11           6.1454                   2.5530                   6.5961                   2.4942
 10           3.2936                   3.1273                   3.8643                   1.6227
 13           4.1863                   1.7116                   5.2072                   1.7295
 12           4.3962                   1.2819                   5.3176                   1.5886
 15           4.9770                   2.1636                   5.9885                   1.8305
 14           3.9819                   2.1874                   4.8022                   1.9183
 17           5.0505                   2.0958                   6.2320                   1.9358
 16           4.8478                   1.9330                   6.0006                   1.8708
 19           5.7478                   1.6025                   1.2072                   0.7680
 18           6.5262                   2.9165                   1.4179                   0.5472
 31           3.9577                   3.2792                   4.9727                   1.8250
 30           4.1003                   3.1310                   5.1299                   2.0556
 37           5.7667                   2.0569                   1.8045                   0.9479
 36           6.5020                   2.8225                   0.7943                   0.6118
 35           6.2714                   2.7715                   0.8853                   0.6334
 34           6.2808                   1.7304                   0.6919                   0.8107
 33           3.7719                   2.7901                   4.4504                   2.2412
 55           5.5590                   1.8052                   6.1271                   1.8110
 32           3.5284                   2.9110                   4.2852                   1.9570
 50           3.8231                   5.1536                   3.6647                   1.9566

Can we say these are good results?

miguelriemoliveira commented 3 years ago

Hi @aaguiar96 ,

I tried it. Really nice. I like the printout. Some suggestions:

  1. You should add a key to stop showing images and just go to the end and print everything.
  2. You could sort the collections before testing, just so you go from 0 to end.

My concern now is the problem you mentioned: that we had bad results when also calibrating with the lidar. @eupedrosa, can you test training with just the images, like @aaguiar96 did?

miguelriemoliveira commented 3 years ago

Hi @eupedrosa, nice to have you back, but I can't resist reminding you that you said "I will work the entire August" : - ).

aaguiar96 commented 3 years ago

Hi @eupedrosa

Thanks!

I found a bug here:

Ok, nice. Did you push that fix?

Ok, you used different train and test datasets and it works, that's nice. I have not done that test yet, but I think the number of collections in the train dataset must be equal to or larger than the number of collections in the test dataset; otherwise the script will break. This is because we use the transformations from the training json while iterating over each collection of the test json. Is this a problem?

Also, @eupedrosa, your dataset is old, so the velodyne xacro is not updated and the velodyne has a rotation. I don't know whether this has an influence on the evaluation...

miguelriemoliveira commented 3 years ago

Not that I am complaining ... I also did almost nothing these last few weeks : - )

aaguiar96 commented 3 years ago

Hi @aaguiar96 ,

I tried it. Really nice. I like the printout. Some suggestions:

1. You should add a key to stop showing images and just go to the end and print everything.

2. You could sort the collections before testing, just so you go from 0 to end.

Thanks @miguelriemoliveira, I'll add those features.

miguelriemoliveira commented 3 years ago

Hi @aaguiar96 ,

I have not done that test yet, but I think the number of collections in the train dataset must be equal to or larger than the number of collections in the test dataset; otherwise the script will break

This should not occur ... Can you pinpoint the line that causes this?

aaguiar96 commented 3 years ago

This should not occur ... Can you pinpoint the line that causes this?

I have not tested it yet, but given the way I implemented it, I think this will occur. How did you avoid this, @miguelriemoliveira? I iterate over the collections in the test dataset to compute the error, and for each collection of the test dataset I extract the calibration result for the same collection in the training dataset.

miguelriemoliveira commented 3 years ago

Well, I think that all we want from the train dataset (the results.json) is the transformations that were marked to be calibrated.

In the train.json, you have this:

"calibration_config": {
    "anchored_sensor": "right_camera", 
    "bag_file": "$ROS_BAGS/agrob/calibration_2020-07-01-10-08-10_0.bag", 
    "calibration_pattern": {
      "border_size": {
        "x": 0.040000, 
        "y": 0.030000
      }, 
      "dictionary": "DICT_5X5_100", 
      "dimension": {
        "x": 11, 
        "y": 8
      }, 
      "fixed": false, 
      "inner_size": 0.045000, 
      "link": "chessboard_link", 
      "mesh_file": "package://atom_calibration/meshes/charuco_5X5_800x600.dae", 
      "parent_link": "base_link", 
      "pattern_type": "charuco", 
      "size": 0.060000
    }, 
    "description_file": "package://agrob_description/urdf/agrob.urdf.xacro", 
    "max_duration_between_msgs": 1000, 
    "sensors": {
      "left_camera": {
        "child_link": "zed_left_camera_frame", 
        "link": "zed_left_camera_optical_frame", 
        "parent_link": "zed_camera_center", 
        "topic_name": "/zed_nano/zed_node/left/image"
      }, 
      "right_camera": {
        "child_link": "zed_right_camera_frame", 
        "link": "zed_right_camera_optical_frame", 
        "parent_link": "zed_camera_center", 
        "topic_name": "/zed_nano/zed_node/right/image"
      }, 
      "vlp16": {
        "child_link": "vlp16_frame", 
        "link": "velodyne", 
        "parent_link": "tower_link", 
        "topic_name": "/velodyne_points"
      }
    }, 
    "world_link": "base_link"
  }, 

which means you can create a list of all transformations being optimized, as we do here:

https://github.com/lardemua/atom/blob/d7e626a90ccac847d7e18b5a70a4dbcb84176ae5/atom_calibration/scripts/calibrate#L291-L298

You will also have to think about the intrinsics, but let's think about that later.

Once you have the list of optimized transforms, you go to your test dataset and, for all collections, replace only those optimized transformations with the ones contained in the train dataset. This means you are using the calibrated transformations.

Then you can happily iterate over the collections of your test dataset and compute the reprojection using homography for each.

Makes sense?
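
A minimal sketch of that replacement, assuming the ATOM json layout (the 'transforms' dictionary keyed by 'parent_link-child_link' is my assumption here):

import json

with open('train_results.json') as f:   # output of calibrate
    train = json.load(f)
with open('test.json') as f:
    test = json.load(f)

# One optimized transform per sensor, keyed as 'parent_link-child_link'
sensors = train['calibration_config']['sensors']
opt_keys = [s['parent_link'] + '-' + s['child_link'] for s in sensors.values()]

# The calibrated transforms are constant across collections, so any train
# collection will do; pick the first one.
selected = sorted(train['collections'].keys())[0]
for key in opt_keys:
    calibrated = train['collections'][selected]['transforms'][key]
    for collection in test['collections'].values():
        collection['transforms'][key] = calibrated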

miguelriemoliveira commented 3 years ago

Just calibrated @eupedrosa's 55-collection dataset, and the evaluation gives poor results ... this is one of the best images I can get

[image]

Calibration seemed to run just fine, reporting camera errors below 1 pixel...

Optimization finished: `xtol` termination condition is satisfied.

Final errors:
Errors per sensor:
  left_camera 0.749841552146
  right_camera 0.84655484869
  vlp16 0.00686746135618
Sensor left_camera 0.749841552146
Sensor right_camera 0.84655484869
Sensor vlp16 0.00686746135618
Saving the json output file to /home/mike/datasets/agrob/agrob_01_07_2020/atom_calibration.json, please wait, it could take a while ...
Completed.

So I guess we have a bug ... but where? In the calibrate or the evaluation?

aaguiar96 commented 3 years ago

Well, I think that all we want from the train dataset (the results.json) is the transformations that were marked to be calibrated.

Yes, that makes sense @miguelriemoliveira, but we only have transformations per collection, right? Which collection should I choose from the training dataset?

aaguiar96 commented 3 years ago

So I guess we have a bug ... but where? In the calibrate or the evaluation?

It should be in the evaluation, either due to the problem we're discussing or due to this:

Also, @eupedrosa, your dataset is old, so the velodyne xacro is not updated and the velodyne has a rotation. I don't know whether this has an influence on the evaluation...

miguelriemoliveira commented 3 years ago

Hi @aaguiar96,

All transformations marked for optimization should be replaced in all collections of the test dataset. Notice that since the transformations are being optimized, we are sure they are fixed, i.e. constant throughout all collections.

So, for example, if you have optimized a base_to_camera1 transformation, you can copy it from your training dataset (any collection will be fine, they are all the same; in these cases we use the selected collection key (**)) to all the collections in the test dataset.

(**) https://github.com/lardemua/atom/blob/d7e626a90ccac847d7e18b5a70a4dbcb84176ae5/atom_calibration/scripts/calibrate#L246-L249 )

aaguiar96 commented 3 years ago

Notice that since the transformations are being optimized, we are sure they are fixed, i.e. constant throughout all collections.

I was missing this. Now I get it. It's an easy fix, I'll commit it in a few moments. Thanks @miguelriemoliveira

miguelriemoliveira commented 3 years ago

Yes, on evaluation. This is good news : - )

Also, another thing: I calibrated with only the cameras (again with good final errors reported) and in the end I get the same large errors. This is good.

Are we using intrinsics and distortion? Not sure ... I will call you.

aaguiar96 commented 3 years ago

Are we using intrinsics and distortion? Not sure ... I will call you.

Yes, we are.

eupedrosa commented 3 years ago

For @miguelriemoliveira

Hi @eupedrosa, nice to have you back, but I can't resist reminding you that you said "I will work the entire August" : - ).

I have no recollection of that....

So I guess we have a bug ... but where? In the calibrate or the evaluation?

Maybe not. Remember, we are projecting from one camera to the other. The errors add up along the way, and we have several sources of error:

Another error that is not accounted for is image synchronization. The only way to eliminate this source of error would be to have some sort of support for the chessboard. @aaguiar96 needs more hours at the gym to reduce his jitter. OR, improve the data collector to improve the synchronization. A 300 ms gap between two images with a moving target can easily create a problem.

For @aaguiar96

Ok, nice. Did you push that fix?

No, I did not. Do you want me to do it?

I have not done that test yet, but I think the number of collections in the train dataset must be equal to or larger than the number of collections in the test dataset; otherwise the script will break. This is because we use the transformations from the training json while iterating over each collection of the test json. Is this a problem?

I do not know. It worked without problems with the datasets that I used.

Also, @eupedrosa, your dataset is old, so the velodyne xacro is not updated and the velodyne has a rotation. I don't know whether this has an influence on the evaluation...

It may not have an influence.

aaguiar96 commented 3 years ago

No, I did not. Do you want me to do it?

No, I'll do that.

I do not know. It worked without problems with the datasets that I used.

I'm fixing this.

Another error that is not accounted for is image synchronization. The only way to eliminate this source of error would be to have some sort of support for the chessboard. @aaguiar96 needs more hours at the gym to reduce his jitter. OR, improve the data collector to improve the synchronization. A 300 ms gap between two images with a moving target can easily create a problem.

I agree with this, @eupedrosa. I think I could try the gym, but it would delay the paper... :-( So, if we are sure this is having an impact, maybe we should try a support? A chair should work, right?

miguelriemoliveira commented 3 years ago

I also agree a static pattern will improve things ... but I think this is not the fundamental problem here.

Look, during calibration we get sub-pixel errors; if we run the evaluation with that same dataset as train and test, then we must see the same magnitude of errors (if not the same). Do you agree?
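
For instance, reusing the command from above with the calibration output json passed as both train and test:

rosrun atom_evaluation camera_to_camera_evalutation.py -train_json atom_calibration.json -test_json atom_calibration.json -ss "right_camera" -ts "left_camera"

If the reported errors come out far above the calibration residuals, the bug is in the evaluation.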

aaguiar96 commented 3 years ago

Well, I think that all we want from the train dataset (the results.json) is the transformations that were marked to be calibrated.

Fixed with the last commit.

aaguiar96 commented 3 years ago

Look, during calibration we get sub-pixel errors; if we run the evaluation with that same dataset as train and test, then we must see the same magnitude of errors (if not the same). Do you agree?

Yes, I think so. Did you test that @miguelriemoliveira ?

eupedrosa commented 3 years ago

Look, during calibration we get sub-pixel errors; if we run the evaluation with that same dataset as train and test, then we must see the same magnitude of errors (if not the same). Do you agree?

Yes, I agree; that is why I always had reservations about the homography. Can I add "my" method to the evaluation script?

aaguiar96 commented 3 years ago

Can I add "my" method to the evaluation script?

Sure @eupedrosa.

miguelriemoliveira commented 3 years ago

Sure. Do it in a new script please.


eupedrosa commented 3 years ago

Before pushing any code, here are some results with "my" approach:

Now the reverse:

miguelriemoliveira commented 3 years ago

Hi @eupedrosa ,

looks good. I would suggest pushing it in a new script. Two questions:

  1. We talked about this before, and you promised you would write some LaTeX, similar to section 4.1 of the RAS paper, explaining your metric, which I could never entirely understand. Are you going to?
  2. Do you consider the undistortion?

eupedrosa commented 3 years ago

I would suggest pushing it in a new script.

Why? The changes are small. I added the "--po, --pattern_object" option to tell the script to use "my" method. Is this ok with you?
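
For reference, a minimal sketch of wiring that option in with argparse (the flag names come from the comment above; the rest is an assumption):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-po', '--pattern_object', action='store_true',
                    help='evaluate with the pattern-object method instead of the homography')
args = parser.parse_args()

method = 'pattern_object' if args.pattern_object else 'homography'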

We talked about this before, and you promised you would write some LaTeX, similar to section 4.1 of the RAS paper, explaining your metric, which I could never entirely understand. Are you going to?

I did not promise this kind of thing... But yes, I said I would do it. However, I think that if I push the code and you analyse the source code, you will understand it immediately.

Do you consider the undistortion?

Yes. But the correct question would be "Do you consider the distortion?"

aaguiar96 commented 3 years ago

Hi @eupedrosa

Great news, thanks! Did you detect any bug in my code? I will look at it again this afternoon.

eupedrosa commented 3 years ago

Yes, @aaguiar96...

https://github.com/lardemua/atom/blob/596a71ca73d75805456cec13b2cd15bf7674bc24/atom_evaluation/scripts/camera_to_camera_evalutation.py#L87-L91

This actually applies distortion, not undistortion.
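
To illustrate the distinction, a minimal sketch using the standard OpenCV radial/tangential model (variable names are hypothetical):

import cv2
import numpy as np

def distort(x, y, K, D):
    # Maps an ideal normalized point (x, y) to distorted pixel coordinates;
    # applying the model in this direction is *distortion*.
    k1, k2, p1, p2, k3 = np.ravel(D)[:5]
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return K[0, 0] * xd + K[0, 2], K[1, 1] * yd + K[1, 2]

# *Undistortion* is the inverse mapping, from measured pixels back to ideal
# normalized coordinates, e.g.:
# ideal = cv2.undistortPoints(pixels, K, D)   # pixels shaped (N, 1, 2)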

aaguiar96 commented 3 years ago

This actually applies distortion, not undistortion.

Ok, I did not validate that, since it seems to work in the deprecated camera-to-camera evaluation script, which actually generated the RAS results. So, are you sure? ... But in any case, we can substitute that function with this one, right?

eupedrosa commented 3 years ago

Ok, I did not validate that, since it seems to work in the deprecated camera-to-camera evaluation script, which actually generated the RAS results. So, are you sure? ...

Lines 89, 90 and 91 are the same piece of code that you can find in the projectToCamera function. @miguelriemoliveira, any comment about this? If this is true, then the RAS results may be slightly incorrect.

But, in any case, we can substitute that function with this one right?

Yes, do you wanna try it?

aaguiar96 commented 3 years ago

Yes, do you wanna try it?

Yes, I'll try!

eupedrosa commented 3 years ago

Sorry @aaguiar96, I already tried it... You took too long to answer :p

aaguiar96 commented 3 years ago

Sorry @aaguiar96, I already tried it... You took too long to answer :p

Eheh :-) And did it solve the issue?

eupedrosa commented 3 years ago

I guess so, take a look at the results:

-------------------------------------------------------------------------------------------------------------
  #           X Error                  Y Error           X Standard Deviation     Y Standard Deviation
-------------------------------------------------------------------------------------------------------------
 54           0.3273                   0.3935                   0.4140                   0.3952
 42           0.4516                   0.5508                   0.5534                   0.3792
 43           0.2612                   0.3228                   0.2610                   0.3892
 49           0.2391                   0.4100                   0.2969                   0.3149
 52           0.3434                   0.2558                   0.3599                   0.3161
 53           0.2954                   0.3455                   0.3861                   0.3914
 24           6.2378                   0.9279                   1.5434                   1.0152
 25           0.5934                   0.6921                   0.4840                   0.7782
 26           0.6475                   1.3307                   0.7598                   0.5316
 27           0.9382                   0.5544                   0.5969                   0.6393
 20           0.3040                   0.4704                   0.3245                   0.6347
 21           0.2004                   0.3152                   0.2283                   0.2108
 48           0.6152                   0.3713                   0.2366                   0.2616
 23           0.2720                   0.3567                   0.3228                   0.2545
 46           0.8017                   0.2958                   0.5807                   0.3701
 47           0.9005                   0.3256                   0.4404                   0.3936
 44           0.4173                   0.6182                   0.4422                   0.5706
 45           0.9265                   0.9456                   0.7686                   1.0560
 28           0.8272                   0.4068                   0.5306                   0.4269
 29           2.3853                   2.0643                   2.8052                   2.2701
 40           1.9813                   0.6554                   0.2480                   0.3013
 41           0.6007                   0.4186                   0.3908                   0.4162
  1           0.2085                   0.3346                   0.2692                   0.3446
  0           0.2288                   0.3434                   0.2661                   0.3355
  3           4.0545                   3.9058                   0.3912                   0.7449
  2           0.1994                   0.2828                   0.2636                   0.2544
  5           0.4639                   0.6997                   0.5599                   0.4159
  4           5.2704                   2.8592                   1.4099                   1.1019
  7           0.6352                   1.4969                   0.7708                   0.9549
  6           0.2773                   0.2526                   0.3468                   0.3156
  9           0.5832                   1.0545                   0.3731                   0.4084
  8           0.3830                   0.2705                   0.4598                   0.3158
 51           0.2836                   0.4368                   0.3274                   0.3122
 39           0.6157                   0.2963                   0.3850                   0.3546
 38           1.9814                   1.4130                   1.5185                   1.3130
 11           2.5919                   3.4461                   0.8551                   0.3681
 10           0.3646                   0.2688                   0.3185                   0.3086
 13           0.3730                   0.4562                   0.4274                   0.5088
 12           1.4290                   1.3410                   1.0756                   0.8816
 15           0.3062                   0.4961                   0.3870                   0.3694
 14           0.5344                   0.2944                   0.4073                   0.3091
 17           0.4672                   0.3799                   0.3700                   0.4031
 16           0.3702                   0.5269                   0.4294                   0.3804
 19           0.7087                   0.9958                   0.3634                   0.3297
 18           0.5761                   0.6418                   0.6789                   0.7323
 31           0.4737                   0.5147                   0.3344                   0.2883
 30           0.3978                   0.3530                   0.3267                   0.3537
 37           0.3008                   0.2611                   0.2788                   0.3150
 36           0.2303                   0.2882                   0.2847                   0.3006
 35           0.2145                   0.3130                   0.2534                   0.2959
 34           0.3321                   1.2157                   0.3697                   0.3709
 33           0.2821                   0.2092                   0.3447                   0.2579
 55           0.3009                   0.3447                   0.3816                   0.4045
 32           0.3227                   0.2456                   0.3424                   0.2959
 50           1.4605                   1.1214                   1.6068                   1.3309
-------------------------------------------------------------------------------------------------------------
 All          0.8339                   0.7139                   1.5016                   1.1131
-------------------------------------------------------------------------------------------------------------

The results are very similar to "my" approach. The error is slightly higher, but that is expected.

aaguiar96 commented 3 years ago

Ok, great!

The results are very similar to "my" approach. The error is slightly higher, but that is expected.

Why? Because of the homography?

Can you push the code, please?