rameau-fr / MC-Calib

A generic and robust calibration toolbox for multi-camera systems
MIT License
377 stars · 54 forks

Errors while calibrating cameras using a static camera rig setup #42

Closed hsurya08 closed 1 year ago

hsurya08 commented 1 year ago

System information (version)

Vision system

Describe the issue / bug

case-1:

0008150 | 2023-07-14, 09:37:58.300074 [info] - Board extraction done!
0008151 | 2023-07-14, 09:37:58.300204 [info] - Intrinsic calibration initiated
0008152 | 2023-07-14, 09:37:58.300231 [info] - Initializing camera calibration using images
0008153 | 2023-07-14, 09:37:58.300250 [info] - NB of board available in this camera :: 120
0008154 | 2023-07-14, 09:37:58.300267 [info] - NB of frames where this camera saw a board :: 120
terminate called after throwing an instance of 'cv::Exception'
  what():  OpenCV(4.2.0) ../modules/calib3d/src/calibration.cpp:1568: error: (-5:Bad argument) For non-planar calibration rigs the initial intrinsic matrix must be specified in function 'cvCalibrateCamera2Internal'

Aborted (core dumped)

Case-2:

0008150 | 2023-07-14, 09:40:59.879276 [info] - Board extraction done!
0008151 | 2023-07-14, 09:40:59.879378 [info] - Intrinsic calibration initiated
0008152 | 2023-07-14, 09:40:59.879389 [info] - Initializing camera calibration using images
0008153 | 2023-07-14, 09:40:59.879398 [info] - NB of board available in this camera :: 0
0008154 | 2023-07-14, 09:40:59.879405 [info] - NB of frames where this camera saw a board :: 0
terminate called after throwing an instance of 'cv::Exception'
  what():  OpenCV(4.2.0) ../modules/calib3d/src/calibration.cpp:3681: error: (-215:Assertion failed) nimages > 0 in function 'calibrateCameraRO'

Case-3:

Screenshot from 2023-07-14 10-48-06

Aborted (core dumped)

Hi, thank you for the amazing work. I am using the MC-Calib package to calibrate my camera rig. I was able to run the toolbox with the example dataset and configuration files you provided, but I could not generate results with my own camera system.

While creating the image dataset for calibration, I kept the camera positions fixed (as shown in the figure) and moved the detection board to capture the images. However, when I try to run the calibration I get errors. With resolution_x and resolution_y set to "500" it shows the error given in case-2, and with a resolution value of "600" it gives the error in case-1. For case-1 I tried to provide an intrinsic parameter file, but it gives me another error, shown in case-3.

I have attached the link to the dataset and also the config files for your reference. Please do help me debug this issue. Any comments or suggestions would be really helpful!

IMG_20230714_101613

rameau-fr commented 1 year ago

Thank you very much for your interest in our work. I have reproduced your error and solved it. The configuration file was modified inappropriately: the first two configuration lines should be

number_x_square: 5
number_y_square: 5 

Here is the entire configuration file I used:

%YAML:1.0

number_x_square: 5
number_y_square: 5
resolution_x: 500          # horizontal resolution in pixel
resolution_y: 500          # vertical resolution in pixel
length_square: 0.028        # parameters on the marker (can be kept as it is)
length_marker: 0.021       # parameters on the marker (can be kept as it is)
number_board: 2
boards_index: []           # leave it empty [] if the board indexes range from zero to number_board; example of usage: boards_index: [5,10] <-- only two boards, with indexes 5 and 10
square_size: 0.037         # size of each square of the board in cm/mm/whatever you want

############# Boards Parameters for different board size (leave empty if all boards have the same size) #################
number_x_square_per_board: []
number_y_square_per_board: []
square_size_per_board: []

######################################## Camera Parameters ###################################################
distortion_model: 0             # 0:Brown (perspective) // 1: Kannala (fisheye)
distortion_per_camera : []         # specify the model per camera, #leave "distortion_per_camera" empty [] if they all follow the same model (make sure that the vector is as long as cameras nb)
number_camera: 4           # number of cameras in the rig to calibrate
refine_corner: 1           # activate or deactivate the corner refinement
min_perc_pts: 0.5           # min percentage of points visible to assume a good detection

cam_params_path: "None" #"../configs/intrinsic.yml"   # file with cameras intrinsics to initialize the intrinsic, write "None" if no initialization available 
fix_intrinsic: 0 #if 1 then the intrinsic parameters will not be estimated nor refined (initial value needed)

######################################## Images Parameters ###################################################
root_path: "/home/MC-Calib/McCalib/data/Scenario1_trail2/"
cam_prefix: "Cam_"

######################################## Optimization Parameters #############################################
ransac_threshold: 10       # RANSAC threshold in pixel (keep it high just to remove strong outliers)
number_iterations: 1000    # Max number of iterations for the non linear refinement

######################################## Hand-eye method #############################################
he_approach: 1 #0: bootstrapped he technique, 1: traditional he

######################################## Output Parameters ###################################################
save_path: "../Results/"
save_detection: 1
save_reprojection: 1
camera_params_file_name: "test.yml" # "name.yml"

Other comments: the reprojection errors for "Sequence1" and "Sequence1_trail2" are a bit too high (~3 pixels). Please double check:

1) your camera synchronization,
2) the rigidity of the surface you use,
3) that you move the board smoothly and slowly to avoid motion blur (even if the images look okay),
4) multiple sequences, running the code multiple times (as we have some randomness involved).
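For context, the per-pixel figures above are averages of reprojection residuals: the distance, in pixels, between each detected corner and its reprojection through the estimated camera model. A minimal numpy sketch of an RMS reprojection error computation (the corner values below are made up for illustration; this is not MC-Calib's exact implementation):

```python
import numpy as np

# Hypothetical detected board corners and their reprojections (pixel coordinates).
detected = np.array([[100.0, 200.0], [150.0, 210.0], [300.5, 400.0]])
reprojected = np.array([[101.0, 199.0], [149.0, 212.0], [300.0, 401.0]])

# Per-corner Euclidean residuals, then root-mean-square over all corners.
residuals = np.linalg.norm(detected - reprojected, axis=1)
rms = np.sqrt(np.mean(residuals ** 2))
```

An RMS around 3 pixels, as reported above, usually indicates a systematic problem (synchronization, board flatness, motion blur) rather than random detection noise.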

hsurya08 commented 1 year ago

Hi @rameau-fr, thank you so much for the quick response and the solution. I was able to modify the config file and get the results. Regarding your comments, I am following them, and the reprojection error is currently around 1.8 pixels.

However, when I try to analyze these results using the Python scripts in the python_utils folder, I get errors for the two files given below; the rest of the scripts run fine. I have not modified anything other than the "path_root" variable, which I set to the path of the result file. I have also attached the files for your reference. Please check the error messages and let me know your views.

Python scripts: https://drive.google.com/drive/folders/1TEbtIS-l7cXuGoSrd-_U12dccQ7JJHKl?usp=sharing

Error-1: sudo python3 Main_compute_pose_error_vs_gt.py

Screenshot from 2023-07-17 11-07-03

Error-2: sudo python3 main_display_cam_obj_pose.py

Screenshot from 2023-07-17 11-03-01

Query: my ultimate goal is to use your calibration tool to find the transformations between three Realsense cameras (two infrared cameras on each, so six cameras in total) arranged as shown in the figure below. I am a bit confused about the calibration process, such as the number of boards required and how to capture images for each of the six cameras. It would be really helpful if you could give me any suggestions or the steps required to calibrate this system. Thanks again for your help. Note: the distances between the camera positions are not accurate; I just gave approximate values to help you understand the camera rig. Screenshot from 2023-07-17 10-53-09

rameau-fr commented 1 year ago

Thank you for your reply; I am glad to see that your problem has been resolved.

Regarding the errors you got using the Python utils: I have to admit that these small pieces of code lack documentation and are poorly implemented, with some outdated dependencies. We made them mostly to create the figures for our paper, and since they are certainly very useful for users of MC-Calib, it would be great to simplify and refactor them completely.

Regarding your error-1: this script was designed only to compare the calibration accuracy against a ground truth on synthetic data. Therefore, you do not need to run it, as it does not apply to your case.

For error-2, we have modified the toolbox output to provide more information, but this function has not been updated accordingly. Here is a quick fix; change the line:

obj_mat = fs.getNode(obj_id).getNode("points").mat()

to:

obj_mat = fs.getNode(obj_id).getNode("points").mat()
obj_mat = obj_mat[:3,:]
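The added slicing simply discards any rows beyond the first three, keeping only the x/y/z coordinates of the board points. A minimal numpy illustration of the same slicing, with made-up point data (the array contents here are hypothetical, not MC-Calib's actual output):

```python
import numpy as np

# Hypothetical 4xN array as loaded from the calibration output file:
# 3D board points plus one extra row added by the newer output format.
obj_mat = np.array([
    [0.000, 0.037, 0.074],   # x coordinates
    [0.000, 0.000, 0.037],   # y coordinates
    [0.000, 0.000, 0.000],   # z coordinates (planar board)
    [1.000, 1.000, 1.000],   # extra row dropped by the fix
])

# Keep only the first three rows, as in the patch above.
obj_mat = obj_mat[:3, :]
```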

Finally, the calibration of your system seems relatively complex: Realsense 2 and 3 are located quite close to each other, so you may have trouble keeping the boards in focus. I would recommend a double-faced board (a different checkerboard on each side of the plane) and maybe two checkerboards in front of Realsense 1. In your case, all the cameras should be mounted on a rigid structure; the checkerboards are placed in the scene, and you move your camera system for the calibration. That is how I would address this problem.

hsurya08 commented 1 year ago

Hi @rameau-fr, thank you for your prompt and informative response. I was able to solve error-2 and get the plots using my calibration results. I greatly appreciate your guidance on calibrating our camera system. Also, apologies for the confusion regarding the position of cameras 2 and 3: they would not be facing each other directly but will be angled downwards, as shown in the figure below. I assume this should solve the focus issue, as I can use a detection board placed on the ground that is visible to both cameras.

However, before proceeding with the calibration process, I have one more query. As shown in the figure below, I have a motion capture system that gives the transformation from the world frame to the mo-cap frame. Similarly, we can get the transformation from the camera-1 frame to the detection board. Using these two transformations, we want to calculate the static transformation between the mo-cap frame and the camera-1 frame. Is it possible to achieve this using the MC-Calib repo? Please do let me know your thoughts on this. Any comments or suggestions would be really helpful!

Screenshot from 2023-07-18 17-50-29

rameau-fr commented 1 year ago

I am glad you managed to make everything work properly. I do not think MC-Calib will be able to resolve the problem with your motion capture system directly. Still, it contains some tools that may be useful for your problem, such as the hand-eye calibration strategy and optimization.

echoGee commented 1 year ago

I am glad you managed to make everything work properly. I do not think MC-Calib will be able to resolve your problem directly for your motion capture system. Still, it might contain some tools that can be useful for your problem, such as hand-eye calibration strategy and optimization.

@rameau-fr could you explain how the hand-eye calibration strategy and optimization may be used in this situation?

rameau-fr commented 1 year ago

If the motion-capture frame and all the cameras are attached to a single rigid structure, and if we assume that the images' acquisition can be synchronized with the motion-capture system, then finding the transformation between the cameras and the motion-capture frame can be expressed as a hand-eye estimation problem. For each calibration image, MC-Calib can provide you with the position of the vision system w.r.t. the calibration object, so it should be technically feasible to use these measurements to calibrate the transformation between the motion capture and the vision rig. My only real concern here would be the synchronization of these systems, which I am unfamiliar with.
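The relation being estimated here is the classic hand-eye equation A·X = X·B, where A is the relative motion of the rig reported by the motion capture, B is the relative motion recovered from MC-Calib's per-frame board poses, and X is the fixed body-to-camera transform being sought. A small numpy sketch that builds synthetic poses and checks this relation (all numeric values are made up for illustration; in practice a solver such as OpenCV's calibrateHandEye would estimate X from many such motion pairs):

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(a):
    """Rotation about the z-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# X: the unknown, constant transform from the mocap body frame to the
# camera rig frame (hypothetical values chosen for illustration).
X = make_T(rot_z(0.3), [0.10, -0.20, 0.05])

# A fixed calibration board somewhere in the world frame (hypothetical).
T_w_board = make_T(rot_z(-0.2), [1.0, 0.5, 0.0])

# Two mocap measurements: pose of the rig body in the world frame.
T_w_body = [make_T(rot_z(0.0), [0.0, 0.0, 1.0]),
            make_T(rot_z(0.7), [0.3, 0.1, 1.2])]

# Corresponding board poses in the camera frame (what MC-Calib reports),
# generated here to be consistent with X by construction.
T_cam_board = [np.linalg.inv(T @ X) @ T_w_board for T in T_w_body]

# Relative motions: A from the mocap poses, B from the camera/board poses.
A = np.linalg.inv(T_w_body[1]) @ T_w_body[0]
B = T_cam_board[1] @ np.linalg.inv(T_cam_board[0])

# The hand-eye relation holds (up to floating-point error).
assert np.allclose(A @ X, X @ B)
```

Synchronization matters precisely because each A must come from the same pair of time instants as its B; any timing offset corrupts the motion pairs.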

hsurya08 commented 11 months ago

Hi @rameau-fr,

Thank you for the valuable information. I have been trying to use the MC-Calib repository to calibrate my camera rig setup (shown in the figure) with four detection boards. I was able to get calibration results with a mean reprojection error of about 2.447 pixels. However, when I try to visualize these results, they do not seem to match the actual camera positions. I tried different calibration approaches, such as:

1) calibrating two cameras at a time,
2) modifying the hand-eye approach (he_approach),
3) providing the intrinsic parameters file,
4) calibrating only one infra camera (infra-1) of each D435 stereo camera.

Screenshot 2023-08-11 115643

However, none of these methods seems to give results similar to the actual setup (I have checked the results using both the "main_display_cam_obj_pose.py" file and manual tf calculation). I was not able to figure out whether the issue is due to improper positioning of the detection boards, the dataset, the config file, or something else I am doing incorrectly. I have attached the link to the dataset and also the config files for your reference. Please do help me debug this issue. Any comments or suggestions would be really helpful!

Drive link: https://drive.google.com/drive/folders/1XNc3x6g_zMchuh4cCBW0tM8foM2h5OcB?usp=sharing

rameau-fr commented 11 months ago

Hello,

I tried running the code on your data and could not obtain good results either. For certain cameras, we have barely any images where a board is detected, due to: 1) out-of-focus images and 2) too-small patterns. Please try printing larger boards; otherwise, the detection of the ChArUco boards is not guaranteed. To me, this is the main cause of failure, given the images you provided. For instance, for cameras 05 and 06, I have just five images where the boards have been detected, which is insufficient to ensure a proper calibration. You can also print boards with fewer squares to make the markers bigger (easier to detect), but you will end up with fewer points for the calibration. Please keep me posted on your calibration process; I am always available to support you ;-)

hsurya08 commented 11 months ago

Hi,

Thank you for your message and the valuable insights you've shared. I appreciate your time and effort. It's clear from your explanation that the main issues affecting the calibration are the limited number of detected boards due to out-of-focus images and small patterns. Your suggestions to print larger boards or reduce the number of squares to make markers more easily detectable seem like practical steps to address these challenges. I'll definitely keep you updated on the progress of my calibration.

Thank you once again for your assistance and willingness to help. I'll reach out if I have any further questions or updates.