miguelriemoliveira / OptimizationUtils

A set of utilities for using the python scipy optimizer functions
GNU General Public License v3.0

Calibration results visualization #46

Open afonsocastro opened 5 years ago

afonsocastro commented 5 years ago

After some time struggling with the transform problems, it is finally possible to compare the pixels of the image points with the reprojected points. For now, this comparison is only possible for one collection at a time (selected on the command line). The intrinsic parameters of the top_right_camera taken from the Matlab stereo calibration are too different from what was expected... I still don't understand why that is happening. The transforms are good and the intrinsics from the other camera are good. Because of this, the points from the stereo calibration have a very large pixel offset.

Captura de ecrã de 2019-08-10 03-36-17

afonsocastro commented 5 years ago

There is a JSON file with the results from the stereo calibration from Matlab: test/sensor_pose_json_v2/matlab.json

miguelriemoliveira commented 5 years ago

Hi @afonsocastro

good work! Some comments:

After some time struggling with the transform problems, it is finally possible to compare the pixels of the image points with the reprojected points.

everyone does.

For now, this comparison is only possible for one collection at a time (selected in the command line).

It is ok for starters.

The intrinsic parameters of the top_right_camera taken from the Matlab stereo calibration are too different from what was expected... I still don't understand why that is happening. The transforms are good and the intrinsics from the other camera are good. Because of this, the points from the stereo calibration have a very large pixel offset.

We should talk by phone. Are you available tomorrow? Can I call you? When?

Abraço, Miguel

afonsocastro commented 5 years ago

Hi @miguelriemoliveira, Yes, I am available. You can call me at any time starting from 11 a.m. Thank you for your help!

miguelriemoliveira commented 5 years ago

Hi @afonsocastro ,

after our phone talk I did some searching. This could be helpful.

https://www.learnopencv.com/homography-examples-using-opencv-python-c/
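For reference, a minimal sketch of that idea (the corner arrays here are synthetic placeholders, not the repo's data):

```python
import cv2
import numpy as np

# Hypothetical matched chessboard corners from the two cameras (48 points,
# as in one collection); synthesized here so the sketch runs end to end.
pts_left = (np.mgrid[0:8, 0:6].T.reshape(-1, 2) * 30 + 100).astype(np.float32)
pts_right = pts_left * 0.9 + np.float32([40, 25])

# Estimate the homography that maps left-image corners to right-image corners.
H, mask = cv2.findHomography(pts_left, pts_right, cv2.RANSAC, 5.0)

# Reproject the left corners into the right image and measure the pixel error.
reproj = cv2.perspectiveTransform(pts_left.reshape(-1, 1, 2), H).reshape(-1, 2)
print("mean |error| per axis (pixels):", np.abs(reproj - pts_right).mean(axis=0))
```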

afonsocastro commented 5 years ago

Hi @miguelriemoliveira ! Following what we discussed, the Matlab stereo calibration was set aside. Now, these are the results for the OpenCV homography finder versus our procedure.

This is already for all collections where the chessboard was detected for both of the cameras (16 collections). It gives a total of 768 points (48 points for each collection).

Captura de ecrã de 2019-08-13 19-49-00

It seems that 85% of the points are within the 30-pixel tolerance; I'm not sure if this is right... Besides this, both results are very similar, although the red dots aren't as scattered. Waiting for some feedback.

miguelriemoliveira commented 5 years ago

Hi @afonsocastro

So you managed to get the opencv findhomography working. That's great!

The results are quite nice. It seems that our approach gives better results.

Some questions / suggestions:

  1. Give different colors to each collection. This should help us find out if there are one or two collections in particular which do not give good results (we could remove them). For this you will need to use colormaps (see the sketch after this list). Here is an example:

https://github.com/lardemua/AtlasCarCalibration/blob/master/interactive_calibration/scripts/example_calibration_graph2.py

  2. You can put circles for our approach vs squares for the others, since color will be used to distinguish collections.

  3. My only concern is the 30-pixel figure. Usually, the average reprojection error is between 0.3 and 2 pixels. 30 is too much, but from what I understand 30 is the maximum reprojection error. Can you compute the average for comparison?
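A sketch of the colormap idea (the errors_by_collection layout is an assumption for illustration, not the script's actual structure):

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm

# Hypothetical layout: collection id -> (48, 2) array of per-point pixel errors.
errors_by_collection = {c: np.random.randn(48, 2) * 10 for c in range(16)}

# One color per collection, drawn from a single colormap.
cmap = cm.get_cmap('tab20', len(errors_by_collection))
for i, (collection, err) in enumerate(sorted(errors_by_collection.items())):
    plt.scatter(err[:, 0], err[:, 1], color=cmap(i), marker='o',
                label='collection {}'.format(collection))

plt.xlabel('x error (pixels)')
plt.ylabel('y error (pixels)')
plt.legend(fontsize='small', ncol=2)
plt.show()
```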

Great work!

afonsocastro commented 5 years ago

Hi @miguelriemoliveira, Here are the results of errors by collection and by both of the verification procedures. You can also see the average error (for all 16 collections) in pixels by each axis:

Captura de ecrã de 2019-08-15 01-00-07

AVERAGE ERROR (our calib): x = 15.9574686686 ; y = 14.4347419739

AVERAGE ERROR (openCV): x = 23.5568695068 ; y = 23.9388504028

As we can see, there are some collections that give bad results... and the average error is far from the 0.3 to 2 pix range that you talked about. So, as you suggested, I removed the 6 worst collections. Obviously, the average error decreased, but it is still around 9 to 10 pixels. This isn't very encouraging! Take a look at the graph and the results:

Captura de ecrã de 2019-08-15 01-59-50

AVERAGE ERROR (our calib): x = 9.35389404297 pix ; y = 10.2167816162 pix

AVERAGE ERROR (openCV): x = 11.2116719564 pix ; y = 15.2526662191 pix

Do you have any idea what is happening? Anyway, our calibration procedure is better than the homography finder function of OpenCV, which is very nice! :D

miguelriemoliveira commented 5 years ago

Hi @afonsocastro

First of all, results look very nice. Using the colormap really improves the quality of the graphics.

Some tips to improve further:

  1. legend: our approach -> proposed approach (that's how it will be on the paper)
  2. legend: "pixel error with our" should be removed; if needed, this information should be in the title
  3. axes legend: y offset [pixels] -> y error (pixels) (also for x)
  4. Use a different colormap which does not carry the idea of good or bad. In this one, it may appear as if the red ones are bad and the green ones are good, which is not true. Use something without red. I often use Pastel1 or Pastel2

https://matplotlib.org/users/colormaps.html

Now for the difficult part: Why is our absolute error so high?

The good news is that it should not be a problem with our approach, since we get the same errors when using the opencv approach. So that leads me to consider the quality of the dataset.

Some ideas:

  1. are we taking into account the distortion parameters?
  2. One of the cameras was not operating very well (very slow). Perhaps there is a de-synchronization effect which causes errors. Suppose one image is taken at time t and the other at time t+x; if the chessboard is moving, since we assume both are from time t, we get high reprojection errors. When taking collections, were you careful to select moments in which the chessboard was not moving?
  3. Can you try find homography with a stereo dataset from the internet?
  4. Can you try our approach with a stereo dataset from the internet?
  5. We should try to fix the frontal camera's low frame rate (We must take a new bag file)

We can speak by phone to try to determine a course of action.

afonsocastro commented 5 years ago

Well... good and bad news: I found a good stereo dataset on the internet! In this dataset, there are 9 pairs of photos where the chessboard is detected by both cameras, so we have 9 "collections". This dataset also contains information about the intrinsic parameters of the cameras. I created a specific script to generate the JSON file that is needed for the optimization procedure. Here's the calibration:

Captura de ecrã de 2019-08-16 22-23-13

If you zoom in on the 3D graph, you can see that the cameras are side by side, as expected for a stereo dataset. So, that was the great news, because it allows us to draw robust conclusions about where the high error comes from! The bad news is that, with this internet dataset, the results of the OpenCV homography finder are very good but the results of our proposed approach aren't within the desired limits:

Captura de ecrã de 2019-08-16 22-26-54

AVERAGE ERROR (our calib): x = 8.28955906997 pix ; y = 2.48023365162 pix

AVERAGE ERROR (openCV): x = 0.396058400472 pix ; y = 0.258330192095 pix

The triangles are so close to the graph origin that it is difficult to see them. So, my thoughts: 1 - The result visualization has no problem (given this good result for OpenCV with the model dataset). 2 - Our dataset has low quality; we really should take a new one. 3 - I still haven't found the problem in our calibration procedure, but I think that with this dataset sample it will be easier to find out.

miguelriemoliveira commented 5 years ago

Hi @afonsocastro

that's good news. We have a standard high quality dataset. Some comments:

AVERAGE ERROR (our calib): x = 8.28955906997 pix ; y = 2.48023365162 pix

yep, something is going wrong. We should be close to opencv's numbers.

AVERAGE ERROR (openCV): x = 0.396058400472 pix ; y = 0.258330192095 pix

Yes, these are the typical values.

The triangles are so close to the graph origin that it is difficult to see them.

That will change once our approach has smaller errors, so let's not worry about it.

So, my thoughts: 1 - The result visualization has no problem (because of this good result for the OpenCV with the model dataset).

Not entirely sure. How do you compute the projection of the pixels? You don't take distortion into account, do you? I think in our optimization procedure we do.

https://github.com/miguelriemoliveira/OptimizationUtils/blob/82906cf842d04d4aefa25f448d3401edfe4624cc/OptimizationUtils/utilities.py#L430

https://github.com/miguelriemoliveira/OptimizationUtils/blob/82906cf842d04d4aefa25f448d3401edfe4624cc/test/sensor_pose_json_v2/objective_function.py#L58

That could be a difference between the optimization and the visualization ...
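For reference, the standard pinhole-plus-distortion ("plumb bob") projection looks roughly like this; a generic sketch, not copied from utilities.py:

```python
import numpy as np

def project_with_distortion(pts, K, D):
    """pts: (N, 3) points in the camera frame; K: 3x3 intrinsic matrix;
    D: (k1, k2, p1, p2, k3) distortion coefficients."""
    # Normalize by depth.
    x = pts[:, 0] / pts[:, 2]
    y = pts[:, 1] / pts[:, 2]
    k1, k2, p1, p2, k3 = D
    r2 = x ** 2 + y ** 2
    # Radial and tangential distortion terms.
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x ** 2)
    y_d = y * radial + p1 * (r2 + 2 * y ** 2) + 2 * p2 * x * y
    # Apply focal lengths and principal point.
    u = K[0, 0] * x_d + K[0, 2]
    v = K[1, 1] * y_d + K[1, 2]
    return np.column_stack((u, v))
```

If the visualization projects without these distortion terms while the optimization applies them, the reprojected pixels will not match.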

2 - Our dataset has low quality, we really should take a new one.

Definitely. But lets stick with the "standard dataset" until we figure out what's wrong.

3 - I still didn't find where is the problem of our calibration procedure, but I think that with this dataset sample it will be easier to find out.

What is the reported error during the optimization? I think if you use only cameras it is in pixels and you can compare directly. Is it below 1? If so, then I think your visualization has something wrong. If not, then the optimization is not well parameterized. Check this line:

https://github.com/miguelriemoliveira/OptimizationUtils/blob/82906cf842d04d4aefa25f448d3401edfe4624cc/test/sensor_pose_json_v2/main.py#L715

Great work!

afonsocastro commented 5 years ago

Yes, in our optimization we take distortion into account. In the results visualization, I didn't compute it because I don't know (yet) how to relate distortion with the pixel reprojection. I will think about it in order to get to some solution that makes sense.

Also, I don't know the influence of that scale factor... In this new dataset, all images have the same size so that shouldn't make any difference. I will study that as well.

For now, here are the results of the optimization procedure, only considering the cameras (for direct comparison pixels to pixels):

If ftol = 0.1:

Average error = 3.54325120363. ftol termination condition is satisfied.

If ftol = 0.02:

Average error = 3.49996586648. ftol termination condition is satisfied.

If ftol = 0.001:

Average error = 3.10085084103. ftol termination condition is satisfied.

Actually, they seem quite a bit better, but not as good as the homography finder of OpenCV. I'm gonna sleep now, but tomorrow I will study the influence of the other parameters.

miguelriemoliveira commented 5 years ago

Hi,

On Sat, 17/08/2019, 04:44, afonsocastro notifications@github.com wrote:

Yes, in our optimization we take distortion into account. In the results visualization, I didn't compute it because I don't know (yet) how to relate distortion with the pixel reprojection. I will think about it in order to get to some solution that makes sense.

A simple test would be to run the optimization disregarding the distortion. Just implement a new "projectWithoutDistortion" function in utilities and use that one in the objective function.
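A minimal sketch of such a function (the name follows the suggestion above; the signature is an assumption):

```python
import numpy as np

def projectWithoutDistortion(pts, K):
    """pts: (N, 3) points in the camera frame; K: 3x3 intrinsic matrix.
    Plain pinhole projection, with the distortion terms dropped."""
    x = pts[:, 0] / pts[:, 2]  # normalize by depth
    y = pts[:, 1] / pts[:, 2]
    u = K[0, 0] * x + K[0, 2]  # apply focal length and principal point
    v = K[1, 1] * y + K[1, 2]
    return np.column_stack((u, v))
```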

Also, I don't know the influence of that scale factor... In this new dataset, all images have the same size so that shouldn't make any difference. I will study that as well.

Not sure about this either...

For now, here are the results of the optimization procedure, only considering the cameras (for direct comparison pixels to pixels):

If ftol = 0.1:

Average error = 3.54325120363. ftol termination condition is satisfied.

If ftol = 0.02:

Average error = 3.49996586648. ftol termination condition is satisfied.

If ftol = 0.001:

Average error = 3.10085084103. ftol termination condition is satisfied.

Actually, they seem quite a bit better, but not as good as the homography finder of OpenCV. I'm gonna sleep now, but tomorrow I will study the influence of the other parameters.

This should be the way. Just reduce ftol even more, e.g. 1e-8, and you should see the other criteria (gtol, xtol) become responsible for the termination of the optimization. Hopefully the average error will reduce as well.
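A sketch of how the termination criteria interact in scipy's least_squares (the residual function is a stand-in for the real reprojection errors):

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params):
    # Stand-in residual vector; in the real problem these are the
    # per-corner reprojection errors.
    x, y = params
    return np.array([x - 1.0, y - 2.0, x * y - 2.0])

result = least_squares(residuals, np.zeros(2), ftol=1e-8, xtol=1e-8, gtol=1e-8)
# result.message reports which of ftol / xtol / gtol stopped the optimizer.
print(result.message, result.cost)
```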

How long does the optimization run? How long does the opencv take?


afonsocastro commented 5 years ago

Hi! 1 - scale factor: As I said, the scale factor should only matter when the two images have different sizes, which isn't the case in this new dataset. Here are the results of the errors without this scale factor:

Captura de ecrã de 2019-08-30 15-01-20

AVERAGE ERROR (our approach): x = 9.08917763792 pix ; y = 7.08498429663 pix

AVERAGE ERROR (openCV): x = 1.09436138765 pix ; y = 0.618174376311 pix

They got worse for both methods, so I will leave the factor as a correction parameter.

2 - distortion: After using the objective function with projectWithoutDistortion, the results of our approach actually improved (x error about 2.1 pix, y error about 0.2 pix):

AVERAGE ERROR (our calib): x = 6.15383421345 pix ; y = 2.20037088276 pix

AVERAGE ERROR (openCV): x = 0.396058400472 pix ; y = 0.258330192095 pix

But they're not what was expected... Do you think that including the distortion parameters in the analysis of the results could be what is missing? If so, shouldn't this test already give us the right errors?

3 - running time: With ftol at 1e-3, the running time of the optimization is a bit more than 6 minutes. I think this is a lot.

miguelriemoliveira commented 5 years ago

Hi @afonsocastro ,

Sorry for the delayed response. I was finishing my vacation and decided to wait for the first day of work to think about this.

Good work. We are making very good progress!

1- scale factor: As I said, the scale factor should only be interesting when the two images have different sizes, which isn't the case in this new dataset. Here are the results of the errors without this scale factor:

AVERAGE ERROR (our approach): x = 9.08917763792 pix ; y = 7.08498429663 pix

AVERAGE ERROR (openCV): x = 1.09436138765 pix ; y = 0.618174376311 pix

They got worse for both methods, so I will leave the factor as a correction parameter.

OK, I don't understand this very well yet, we should discuss it in person.

2 - distortion: After using the objective function with projectWithoutDistortion, the results of our approach actually improved (x error about 2.1 pix, y error about 0.2 pix):

AVERAGE ERROR (our calib): x = 6.15383421345 pix ; y = 2.20037088276 pix

AVERAGE ERROR (openCV): x = 0.396058400472 pix ; y = 0.258330192095 pix

Hmm, the average error is "x error about 2.1 pix, y error about 0.2 pix", but in your numbers above it's x = 6.1 and y = 2.2? It should be the same, no?

But they're not what was expected... Do you think that including the distortion parameters in the analysis of the results could be what is missing? If so, shouldn't this test already give us the right errors?

Not sure, lets discuss.

3 - running time: With ftol at 1e-3, the running time of the optimization is a bit more than 6 minutes. I think this is a lot.

Yes, this should be enough to get to a very accurate result.

I suggest the following simple test.

Change the optimization code to project some fixed 3D point, and pass it through the pipeline to see which pixel coordinates the point is transformed to (taking the camera pose and intrinsics into consideration).

Next, do the same using the same 3D point and the same camera pose and intrinsics in your evaluation code. The xpix, ypix values for the projection should be the same (to the 8th or 9th decimal place).
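A runnable version of that sanity check, using cv2.projectPoints as one pipeline and the hand-written plumb-bob formulas as the other (K, D, and the point are arbitrary values for illustration):

```python
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
D = np.array([0.1, -0.05, 0.001, 0.002, 0.0])  # k1, k2, p1, p2, k3
pt = np.array([[0.1, -0.05, 1.2]])  # one fixed 3D point in the camera frame

# Pipeline A: OpenCV's projection with identity extrinsics.
uv_cv, _ = cv2.projectPoints(pt, np.zeros(3), np.zeros(3), K, D)

# Pipeline B: the same plumb-bob model written out by hand.
x, y = pt[0, 0] / pt[0, 2], pt[0, 1] / pt[0, 2]
r2 = x * x + y * y
radial = 1 + D[0] * r2 + D[1] * r2 ** 2 + D[4] * r2 ** 3
u = K[0, 0] * (x * radial + 2 * D[2] * x * y + D[3] * (r2 + 2 * x * x)) + K[0, 2]
v = K[1, 1] * (y * radial + D[2] * (r2 + 2 * y * y) + 2 * D[3] * x * y) + K[1, 2]

# The two pipelines should agree to ~1e-9 pixels; a mismatch pinpoints
# the inconsistent projection code.
np.testing.assert_allclose(uv_cv.ravel(), [u, v], atol=1e-8)
```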

I suspect these values are different, and that will explain the "bad" results we are getting.

We should meet this week. Is tomorrow or Wednesday ok for you?

Miguel

afonsocastro commented 5 years ago

Hi @miguelriemoliveira, I hope you had a great vacation, thanks for the continued help and feedback! About our meeting, yes. Tomorrow morning is ok for me, and Wednesday as well. Could it be tomorrow, at 10 a.m.?

miguelriemoliveira commented 5 years ago

Hi @afonsocastro

tomorrow 10 a.m.

See you then. Miguel

afonsocastro commented 5 years ago

For tomorrow's discussion: if the results evaluation function works with the projected pixels (and not with the ground-truth pixels), the OpenCV homography finder has a bigger error than our approach:

Captura de ecrã de 2019-09-02 23-58-04

AVERAGE ERROR (our calib): x = 7.49847713518 pix ; y = 1.81251488203 pix

AVERAGE ERROR (openCV): x = 11.7356160482 pix ; y = 6.17495087047 pix

afonsocastro commented 5 years ago

Hi! Good and bad news: In comparison with the OpenCV function (calibrate camera, used to get the sensor-chessboard transform needed for our reprojection error procedure), our optimization has better results! These are the results after calibrating the sensor poses with 9 collections:

AVERAGE ERROR (our optimization): x = 8.23603048442 pix ; y = 1.97276852455 pix

AVERAGE ERROR (openCV calibrate camera): x = 22.6928228684 pix ; y = 3.23359887394 pix

Captura de ecrã de 2019-09-15 18-52-13

The bad news is that our code has some bugs. The results with only one collection show that the pixel error got bigger in comparison to the 9-collection study. The OpenCV calibrate camera function actually got better, as expected:

AVERAGE ERROR (our optimization): x = 80.6875678168 pix ; y = 34.6731363932 pix

AVERAGE ERROR (openCV calibrate camera): x = 6.31464979384 pix ; y = 1.34845966763 pix

Captura de ecrã de 2019-09-15 20-07-23

I'm going to think about this; do you have any idea? Maybe some test to pinpoint where the problem is?

miguelriemoliveira commented 5 years ago

Hi Afonso,

Let's meet on Wednesday afternoon or Thursday morning or afternoon to search for the bug.

Are you available?

Miguel


afonsocastro commented 5 years ago

Hi @miguelriemoliveira , yes I am available. Wednesday, at 2 pm?

Afonso

miguelriemoliveira commented 5 years ago

ok.


afonsocastro commented 5 years ago

Hi, after our meeting, I've implemented our conclusions in order to also get the comparison with the results of the OpenCV stereo calibration function. For now, these are the results:

Captura de ecrã de 2019-09-27 17-20-10

AVERAGE ERROR (our optimization): x = 8.83484188127 pix ; y = 2.65536649728 pix

AVERAGE ERROR (openCV stereo calibration): x = 4.5144050504 pix ; y = 0.95115454403 pix

AVERAGE ERROR (openCV calibrate camera): x = 27.3388310185 pix ; y = 29.2516185619 pix

I recall that this is, as we know, a bad dataset because the pattern has rectangles instead of squares and we don't know the size of the rectangles. I will try to test it with our new dataset tonight to see the results.

miguelriemoliveira commented 5 years ago

Hi @afonsocastro ,

just a correction (I am not sure if you missed the email from Angel)

"a bad dataset because this pattern has rectangles instead of squares and we dont know the size of the rectangles."

We know the size of the rectangles; Angel sent a report with the page number where this information is present.

Good work, Miguel


miguelriemoliveira commented 5 years ago

I am really excited to see the results in a "good dataset" ...

miguelriemoliveira commented 5 years ago

... and this is already uniform in terms of comparison?

afonsocastro commented 5 years ago

Hi @miguelriemoliveira! After solving some problems I found, here are the results for the good dataset... They look very nice!! :+1: First of all, the first part of our approach (creating the sensors' pose first guess, labeling data and collecting snapshots) was working only for 8x6 chessboards (the old chessboard). Now it requires, as an input argument, the number of squares to create the original JSON file. So, now the code is more robust! (readme updated)

Here are the results:

Captura de ecrã de 2019-10-06 17-21-21

AVERAGE ERROR (our optimization): x = 0.148268815354 pix ; y = 0.188933897445 pix

AVERAGE ERROR (openCV stereo calibration): x = 0.161108901218 pix ; y = 0.221039052211 pix

AVERAGE ERROR (openCV calibrate camera): x = 0.180819144803 pix ; y = 0.216612541813 pix

I am very happy with these results because they are all within the expected ranges. More than that, our approach reached a better sensor configuration than the OpenCV tools!

It's important to remember that all of this is only for cameras and that now the square size was the real one (I think this fact is mainly responsible for the difference in results in comparison to the internet dataset).

Some notes about this test:

1 - Time running: our optimization ---> ~40 minutes; openCV calibrate camera ---> 1 minute (maybe less); openCV stereo calibration ---> a few seconds (quickest)

2 - All chessboard corners were taken into account (9x6=54). 29 collections were studied. Total studied points (for each procedure): 1566

3 - Our optimization worked with the distortion parameters, as the OpenCV tools. Results visualization did not (as always).

The comparison of the results is uniform: sensor 1 to chessboard transform was found using solvePnP (with the intrinsic parameters computed by each procedure, respectively). Sensor 2 to chessboard transform was taken directly from the final JSON file of each calibration (for stereo, it required the combination with the sensor1-chessboard tf, found by solvePnP).
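A sketch of that solvePnP step (K1, D1, the square size, and the synthetic detections are placeholders for each procedure's intrinsics and the real corner detections):

```python
import cv2
import numpy as np

# Chessboard inner corners in the chessboard frame (9x6, z = 0), spaced by
# the real square size (the value here is hypothetical).
square = 0.025
obj_pts = np.zeros((54, 3))
obj_pts[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2) * square

# Placeholder intrinsics; in practice these come from each calibration.
K1 = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
D1 = np.zeros(5)

# Synthetic detections (projected from a known pose) so the sketch runs;
# in practice img_pts are the corners detected in sensor 1's image.
rvec_gt = np.array([0.1, -0.2, 0.05])
tvec_gt = np.array([0.0, 0.0, 1.0])
img_pts, _ = cv2.projectPoints(obj_pts, rvec_gt, tvec_gt, K1, D1)

# Recover the pose: T maps chessboard-frame points into the sensor 1 frame.
ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K1, D1)
R, _ = cv2.Rodrigues(rvec)
T = np.eye(4)
T[:3, :3], T[:3, 3] = R, tvec.ravel()
```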

As we can see in the results graph, collection 11 or 8 or 9 (not sure which) was the worst, by far, for all approaches. I can run it all over again without this collection to see if the results get better. Or even run our optimization with only the 4 chessboard corners, to test the hypothesis we talked about.

miguelriemoliveira commented 5 years ago

Hi Afonso,

these are very good news! The results look really amazing. And our approach is better than opencv? Fantastic.

You were very lucky - often you don't get such great results when you really need them. You had these great results just in time to shift entirely to the writing. Then, after the thesis is written, we can advance to the lidar case.

More comments below ...

Congratulations! Miguel

On Sun, 6 Oct 2019 at 18:21, afonsocastro notifications@github.com wrote:

Hi @miguelriemoliveira https://github.com/miguelriemoliveira ! After solving some problems I found, here are the results for the good dataset... They look very nice!! 👍 First of all, the first part of our approach (creating the sensors' pose first guess, labeling data and collecting snapshots) was working only for 8x6 chessboards (the old chessboard). Now it requires, as an input argument, the number of squares to create the original JSON file. So, now the code is more robust! (readme updated)

Great. And readme updated? You are learning :-) .

Here are the results:

[image: Captura de ecrã de 2019-10-06 17-21-21] https://user-images.githubusercontent.com/47828797/66272141-c36fd700-e85d-11e9-930f-ad811adb83d9.png

AVERAGE ERROR (our optimization): x = 0.148268815354 pix ; y = 0.188933897445 pix

AVERAGE ERROR (openCV stereo calibration): x = 0.161108901218 pix ; y = 0.221039052211 pix

AVERAGE ERROR (openCV calibrate camera): x = 0.180819144803 pix ; y = 0.216612541813 pix

I am very happy with these results because they are all within the expected ranges. More than that, our approach reached a better sensor configuration than the openCV tools!

Yes, exactly what we expected! 0.2 pixels is a very good calibration. And ours being better is the cherry on top of the cake ...

It's important to remember that all of this is only for cameras, and that now the square size was the real one (I think this fact is mainly responsible for the difference in results, in comparison to the internet dataset).

Yes, you are right. Thinking in retrospect, we were asking the optimizer to find an impossible solution.

Some notes about this test: 1 - Time running: Our optimization ---> ~ 40 minutes

That's a lot. But if it works, great. Time is not the important part. Also, perhaps you used the visual graphics part. Without graphics it goes much faster.

openCV calibrate camera ---> 1 minute (maybe less) openCV stereo calibration ---> few seconds (quickest)

2 - All chessboard corners were taken into account (9x6=54). 29 collections were studied. Total studied points (for each procedure): 1566

This is already a large optimization problem. Hence the 40 minutes. Please explain why you think opencv's solutions are much faster.

3 - Our optimization worked with the distortion parameters, as the openCV tools did. Results visualization did not (as always).

Not a priority, but this could make the values of error even smaller.

The comparison of the results is uniform: sensor 1 to chessboard transform was found using solvePnP (with the intrinsic parameters computed by each procedure, respectively). Sensor 2 to chessboard transform was taken directly from the final JSON file of each calibration (for stereo, it required the combination with the sensor1-chessboard tf, found by solvePnP).

This is more great news. Doing things in a uniform manner is the way to go.

As we can see in the results graph, collection 11 or 8 or 9 (not sure which) was the worst, by far, for all approaches. I can run all over again without this collection to see if the results got better.

Good idea. This is done for example when using matlab calibration toolbox. But also not a priority.

Or even run our opt only with the 4 chessboard corners, to test that hypothesis that we had talked about.

Yep, you could try it to see how much faster it goes and how much accuracy it loses.


afonsocastro commented 5 years ago

Hi! 1 - I ran the optimization without the visual graphics part and it took less than a minute to finish. The 40 minutes mentioned before were only due to that, since this test was made with exactly the same number of studied points.

2 - I tried our calibration and the openCV calibrations without collection 11 to see if the results would get better and, to my surprise, the optimization results got a bit worse... The stereo and camera calibrations actually improved their errors:

Captura de ecrã de 2019-10-08 12-43-45

Total studied points (for each procedure): 1512

AVERAGE ERROR (our optimization): x = 0.14696353327 pix ; y = 0.238686718007 pix

AVERAGE ERROR (openCV stereo calibration): x = 0.13414824955 pix ; y = 0.132025965938 pix

AVERAGE ERROR (openCV calibrate camera): x = 0.112190791539 pix ; y = 0.13600612822 pix

3 - Test of optimization using only the chessboard four corners as residuals (also without collection 11):

Captura de ecrã de 2019-10-08 12-51-00

AVERAGE ERROR (our optimization): x = 0.398715105006 pix ; y = 0.290294707768 pix

AVERAGE ERROR (openCV stereo calibration): x = 0.13414824955 pix ; y = 0.132025965938 pix

AVERAGE ERROR (openCV calibrate camera): x = 0.112190791539 pix ; y = 0.13600612822 pix

As we can see, there was an error increase in the optimization. This average error is computed using all the chessboard corners, which makes me conclude that the final sensor pose is not as accurate as with the all-corners calibration. The elapsed time was similar to the previous experiment, so the difference is only about the graphics.