Easonyesheng / CCS

[RA-L&IROS22] A learning-based camera calibration system.
MIT License

How to do calibration on real camera data? #2

Closed allsthe011 closed 1 year ago

allsthe011 commented 1 year ago

I have a webcam I would like to perform calibration on. Calibration on real data was mentioned in the paper, but there doesn't seem to be a way to implement that using the code.

Thanks in advance!

Easonyesheng commented 1 year ago

The pursuit of calibration precision can be time-consuming.

  1. Train a detector model on your own chessboard size and background using `train_CornerDetect.py` and `DataGenerator.py`;
  2. If distortion exists, train a model to correct it using `train_DistCorr.py` and `DataGenerator.py`;
  3. Perform calibration with `calibration.py` -> `calib_by_RANSAC_practical()` and tune your parameters.
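
The steps above might look roughly like the following. The invocations are a sketch: the script names come from the repo, but any arguments or configuration each one expects are not shown here, so check each script before running.

```shell
# 1. Synthesize chessboard data matching your board size and background,
#    then train the corner detector on it.
python DataGenerator.py
python train_CornerDetect.py

# 2. Only if your camera has visible distortion: train the correction model.
python train_DistCorr.py

# 3. Run calibration (calls calib_by_RANSAC_practical internally) and tune.
python calibration.py
```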
allsthe011 commented 1 year ago

Thanks for this! What if I was only interested in getting the radial distortion coefficients (k1, k2, k3)? How would I do that using my own camera images?

Easonyesheng commented 1 year ago

This method directly corrects the distorted images, which means it outputs correction coefficients rather than radial distortion coefficients. If you want to correct your images, you only need to train a distortion correction model. But before that, you need to generate ground-truth data, including distorted and original image pairs, using `DataGenerator.py`. Note: for better performance, the synthetic dataset should be close to the real one, i.e. in chessboard size and background.

allsthe011 commented 1 year ago

Understood. Though, would it be possible to reverse-engineer the correction coefficients to obtain the radial distortion coefficients?

Easonyesheng commented 1 year ago

Actually, it's hard to acquire accurate radial distortion coefficients, as the radial model we use is not completely invertible.
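
To illustrate the non-invertibility point (this is the generic polynomial radial model, not necessarily the exact model used in this repo): the forward map from undistorted to distorted radius is a polynomial, and there is no closed-form inverse, so undistortion is typically done by fixed-point iteration.

```python
def distort_radius(r_u, k1, k2, k3=0.0):
    """Forward polynomial radial model: undistorted radius -> distorted."""
    r2 = r_u * r_u
    return r_u * (1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3)

def undistort_radius(r_d, k1, k2, k3=0.0, iters=20):
    """No closed-form inverse exists; iterate r_u <- r_d / poly(r_u)."""
    r_u = r_d  # start from the distorted radius
    for _ in range(iters):
        r2 = r_u * r_u
        r_u = r_d / (1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3)
    return r_u

# Round-trip with illustrative coefficients: distort, then invert numerically.
r_d = distort_radius(0.8, k1=-0.2, k2=0.05)
r_u = undistort_radius(r_d, k1=-0.2, k2=0.05)
```

For moderate distortion the iteration converges quickly, but it is only a numerical approximation of the inverse, which is why exact coefficients are hard to recover from a learned correction.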

allsthe011 commented 1 year ago

Got it. Would you say that the radial distortion coefficients obtained from OpenCV's camera calibration are accurate/sufficient enough?

Easonyesheng commented 1 year ago

If your images only have slight distortion, then yes. And I recommend using MATLAB, which is more stable than OpenCV.

allsthe011 commented 1 year ago

Will take note of that, thanks. Also, would it be advisable to perform image sharpening on images before camera calibration to give OpenCV or Matlab more pixel/image data for sub-pixel corner detection?

Easonyesheng commented 1 year ago

Not recommended, as the sharpening may change the original corner locations, I think.

allsthe011 commented 1 year ago

I see. I've used a combination of image super-sampling followed by image shrinking for the implementation I'm attempting. And based on my testing, there were minor improvements in the reprojection error - but improvements, nonetheless.

allsthe011 commented 1 year ago

Would you recommend that I still continue my implementation given the results of testing?

Easonyesheng commented 1 year ago

That's pretty good. But the exact corner locations need to be preserved during the processing. I recommend running some tests on ground-truth data.

allsthe011 commented 1 year ago

What ground truth data should I perform testing on? And how do I obtain/generate that ground truth data? (I apologize, I'm still a bit of a novice in this space)

Easonyesheng commented 1 year ago

You can use `DataGenerator.py` to generate a chessboard image with ground-truth (GT) corner coordinates (no need to project with the simulator camera; just draw the image directly), then apply your operation to obtain the processed image (img_0). If you know the GT corner coordinates after processing (e.g. after image super-resolution, the GT coordinates change linearly), you can draw the chessboard image again (img_1) and compare the two images (img_0 and img_1) to see the difference. If the corner locations have changed significantly, the image difference will be obvious.
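
A minimal NumPy-only sketch of this test (it draws the board directly rather than using `DataGenerator.py`, and uses nearest-neighbour 2x super-sampling followed by 2x2 block averaging as the stand-in "processing"):

```python
import numpy as np

def draw_chessboard(square_px=20, squares=8):
    """Binary chessboard image; inner-corner coordinates are known exactly."""
    n = square_px * squares
    img = np.zeros((n, n), np.uint8)
    for i in range(squares):
        for j in range(squares):
            if (i + j) % 2 == 0:
                img[i*square_px:(i+1)*square_px, j*square_px:(j+1)*square_px] = 255
    corners = np.array([(x * square_px, y * square_px)
                        for y in range(1, squares) for x in range(1, squares)],
                       np.float32)
    return img, corners

img0, gt = draw_chessboard()

# "Processing": super-sample 2x (nearest-neighbour), then shrink back
# by averaging each 2x2 block.
up = np.kron(img0, np.ones((2, 2), np.uint8))
down = up.reshape(img0.shape[0], 2, img0.shape[1], 2).mean(axis=(1, 3)).astype(np.uint8)

# Under this ideal pipeline the GT corners are unchanged, so the image
# difference is zero; a large difference would flag shifted corners.
diff = np.abs(img0.astype(int) - down.astype(int))
print("max pixel difference:", diff.max())
```

Swapping in your real super-sampling/shrinking operators in place of the `np.kron`/averaging pair gives the comparison described above.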

allsthe011 commented 1 year ago

Got it. Here's an example of three corner locations from my testing (after sub-pixel refinement):

Corner locations:

- Calibration on regular images: [746.4545, 425.5057], [712.4864, 425.4922], [678.32355, 425.57083]
- Calibration on processed images: [746.4544, 425.51306], [712.48346, 425.4849], [678.3135, 425.57437]

Reprojection error:

- Regular images: 0.02410773208254674
- Processed images: 0.02379032051181541

Are these corner location differences, and reprojection error differences significant?
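
For reference, the relative improvement implied by the two reprojection errors above:

```python
rpe_regular = 0.02410773208254674
rpe_processed = 0.02379032051181541

# Relative improvement of the processed pipeline over the regular one.
improvement_pct = 100.0 * (rpe_regular - rpe_processed) / rpe_regular
print(f"RPE improvement: {improvement_pct:.2f}%")  # → RPE improvement: 1.32%
```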

Easonyesheng commented 1 year ago

No, it's minor.

allsthe011 commented 1 year ago

Minor for both the corner location differences and the reprojection error?

Would having an implementation that results in a ~1% RPE improvement (such as this) have any merit as the focal point of a research study?

Easonyesheng commented 1 year ago

The corner difference is minor; for the RPE it's hard to say. Since the reprojection error does not directly reflect the accuracy of calibration, it's better to use additional metrics to show the improvement if possible, such as accuracy gains in a downstream application (SfM or SLAM). Or you can compare the intrinsic parameter error, as in our paper.
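
A minimal sketch of that intrinsic-parameter-error comparison, with made-up ground-truth and estimated values (the numbers here are purely illustrative):

```python
import numpy as np

# Hypothetical ground-truth vs estimated intrinsics: fx, fy, cx, cy.
gt  = np.array([800.0, 800.0, 320.0, 240.0])
est = np.array([803.2, 798.9, 321.1, 239.4])

# Relative error per parameter, as a percentage.
rel_err = 100.0 * np.abs(est - gt) / gt
print(dict(zip(["fx", "fy", "cx", "cy"], rel_err.round(3))))
```

On synthetic data (where the true intrinsics are known) this gives a direct accuracy measure, unlike RPE, which only measures fit to the detected corners.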

allsthe011 commented 1 year ago

Got it. Thanks so much for your help!