allsthe011 closed this issue 1 year ago
The pursuit of calibration precision can be time-consuming.
Thanks for this! What if I was only interested in getting the radial distortion coefficients (k1, k2, k3)? How would I do that using my own camera images?
This method directly corrects the distorted images, which means it outputs correction coefficients rather than radial distortion coefficients. If you want to correct your images, you only need to train a distortion correction model. But before that, you need to generate some ground truth data consisting of distorted images and their originals using `DataGenerator.py`. Note: for better performance, the synthetic dataset should be close to the real one, e.g. in chessboard size and background.
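For context, a minimal sketch of the standard radial distortion model that such synthetic data is typically built on (this is an illustration in plain NumPy, not the repo's actual `DataGenerator.py`; the function name `radial_distort` is hypothetical):

```python
import numpy as np

def radial_distort(points, k1, k2, k3, center=(0.0, 0.0)):
    """Apply the standard radial model to 2-D normalized points:
    x_d = x * (1 + k1*r^2 + k2*r^4 + k3*r^6), likewise for y."""
    pts = np.asarray(points, dtype=float) - center
    r2 = np.sum(pts**2, axis=1, keepdims=True)
    factor = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    return pts * factor + center

# Undistorted grid points (normalized coordinates) and their distorted versions.
grid = np.array([[x, y] for x in (-0.5, 0.0, 0.5) for y in (-0.5, 0.0, 0.5)])
distorted = radial_distort(grid, k1=-0.2, k2=0.05, k3=0.0)
```

With a negative k1 (barrel distortion), points far from the center are pulled inward while the center point is unchanged; rendering both point sets as chessboard images gives a distorted/original training pair.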
Understood. Though, would it be possible to reverse-engineer the correction coefficients to obtain the radial distortion coefficients?
Actually, it's hard to recover exact radial distortion coefficients, as the radial model we used is not completely reversible.
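To illustrate why: inverting the radial model has no closed form of the same polynomial shape, so the inverse can only be approximated numerically. A small sketch using fixed-point iteration on the radius (illustrative code, not from the repo):

```python
import numpy as np

def distort_radius(r, k1, k2, k3):
    # Forward radial model: r_d = r * (1 + k1*r^2 + k2*r^4 + k3*r^6)
    return r * (1.0 + k1 * r**2 + k2 * r**4 + k3 * r**6)

def undistort_radius(r_d, k1, k2, k3, iters=30):
    # No exact inverse exists; iterate r <- r_d / (1 + k1*r^2 + ...)
    # starting from the distorted radius until it converges.
    r = r_d
    for _ in range(iters):
        r = r_d / (1.0 + k1 * r**2 + k2 * r**4 + k3 * r**6)
    return r

r = 0.8
r_d = distort_radius(r, -0.2, 0.05, 0.0)
r_back = undistort_radius(r_d, -0.2, 0.05, 0.0)  # approximately recovers r
```

Because the inverse is only available iteratively, a learned correction mapping does not translate into a unique (k1, k2, k3) triple in general.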
Got it. Would you say that the radial distortion coefficients obtained from OpenCV's camera calibration are accurate/sufficient enough?
If your images only have slight distortion, then yes. And I recommend using Matlab, which is more stable than OpenCV.
Will take note of that, thanks. Also, would it be advisable to perform image sharpening on images before camera calibration to give OpenCV or Matlab more pixel/image data for sub-pixel corner detection?
Not recommended, since the sharpening may change the original corner locations, I think.
I see. I've used a combination of image super-sampling followed by image shrinking for the implementation I'm attempting. And based on my testing, there were minor improvements in the reprojection error - but improvements, nonetheless.
Would you recommend that I still continue my implementation given the results of testing?
That's pretty good. But the exact corner locations need to be preserved during the processing. I recommend doing some tests on ground truth data.
What ground truth data should I perform testing on? And how do I obtain/generate that ground truth data? (I apologize, I'm still a bit of a novice in this space)
You can use `DataGenerator.py` to generate a chessboard image with GT corner coordinates (no need to project with the simulated camera; just draw the image directly), then perform your operation to obtain the processed image (img_0). If you know the GT corner coordinates after processing (e.g. after image super-resolution, the GT coordinates scale linearly), you can draw the chessboard image again (img_1) and compare the two images (img_0 and img_1) to see the difference. If the corner locations have changed significantly, the image difference will be obvious.
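The procedure described above can be sketched roughly as follows (a plain-NumPy illustration, not the repo's `DataGenerator.py`; `draw_chessboard` and its parameters are hypothetical):

```python
import numpy as np

def draw_chessboard(squares=6, square_px=40):
    """Render a chessboard image and return it with its GT inner-corner coords."""
    size = squares * square_px
    img = np.zeros((size, size), dtype=np.uint8)
    for i in range(squares):
        for j in range(squares):
            if (i + j) % 2 == 0:
                img[i*square_px:(i+1)*square_px, j*square_px:(j+1)*square_px] = 255
    # Ground-truth inner corners lie on the square boundaries.
    coords = np.array([[x * square_px, y * square_px]
                       for y in range(1, squares) for x in range(1, squares)], float)
    return img, coords

img0, gt0 = draw_chessboard()
# After 2x super-resolution, the GT corner coordinates scale linearly;
# redraw at the new scale to get img_1 and compare against the processed img_0.
scale = 2.0
gt1 = gt0 * scale
```

Any corner detected in the processed image that deviates from the scaled GT coordinates is error introduced by the processing itself.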
Got it. Here's an example of three corner locations from my testing (after sub-pix):
Calibration on Regular Images:   [746.4545  425.5057 ], [712.4864  425.4922 ], [678.32355 425.57083]
Calibration on Processed Images: [746.4544  425.51306], [712.48346 425.4849 ], [678.3135  425.57437]

Reprojection Error:
Calibration on Regular Images:   0.02410773208254674
Calibration on Processed Images: 0.02379032051181541
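For reference, the per-corner shifts and the relative RPE change can be computed directly from the numbers above (values copied from this thread):

```python
import numpy as np

regular = np.array([[746.4545, 425.5057], [712.4864, 425.4922], [678.32355, 425.57083]])
processed = np.array([[746.4544, 425.51306], [712.48346, 425.4849], [678.3135, 425.57437]])

# Euclidean displacement of each corner, in pixels.
shifts = np.linalg.norm(regular - processed, axis=1)

rpe_regular = 0.02410773208254674
rpe_processed = 0.02379032051181541
# Relative reduction in reprojection error.
improvement = (rpe_regular - rpe_processed) / rpe_regular
```

This puts every corner shift well under 0.02 px and the RPE improvement at roughly 1.3%.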
Are these corner location differences, and reprojection error differences significant?
No, it's minor.
Minor for both the corner location differences and the reprojection error?
Would an implementation that yields a ~1% RPE improvement (such as this) have any merit as the focal point of a research study?
The corner difference is minor, and for the RPE it's hard to say. Since the reprojection error does not directly reflect the accuracy of calibration, it's better to apply more metrics to show the improvement if possible, such as accuracy improvements in downstream applications (SfM or SLAM). Or you can compare the intrinsic parameter error, as in our paper.
Got it. Thanks so much for your help!
I have a webcam I would like to perform calibration on. Calibration on real data was mentioned in the paper, but there doesn't seem to be a way to implement that using the code.
Thanks in advance!