Lizomi closed this issue 1 year ago
Thank you very much for reporting this problem. It seems that you have a problem when loading your precalibrated intrinsic parameters. It could be due to a wrong path or a typo in the *.yml file, as they are very sensitive to that. But I am more concerned by the results you obtained without fixing the intrinsic parameters. The focal lengths in the camera matrices are not coherent, which might suggest that you suffer from another problem. Would it be possible to share your data (or a part of it) with me so that I can try running MC-Calib on it? A temporary link, for instance, would be perfect.
Thank you again for using this toolbox; your feedback is highly valuable to us to improve this work.
First, sorry, I pasted the wrong result. The actual result is:
%YAML:1.0
---
nb_camera: 2
camera_0:
   camera_matrix: !!opencv-matrix
      rows: 3
      cols: 3
      dt: d
      data: [ 9.3130138190225057e+02, 0., 9.4019467776340048e+02, 0.,
          9.2606583228346551e+02, 5.5256336101596389e+02, 0., 0., 1. ]
   distortion_vector: !!opencv-matrix
      rows: 1
      cols: 5
      dt: d
      data: [ 8.3884773391789888e-02, -6.6332202785836750e-02,
          1.1175567025805114e-03, -6.3243808429203734e-03,
          3.2262531950498510e-02 ]
   distortion_type: 0
   camera_group: 0
   img_width: 1920
   img_height: 1080
   camera_pose_matrix: !!opencv-matrix
      rows: 4
      cols: 4
      dt: d
      data: [ 1., 0., 0., 0., 0., 1., 0., 0., 0., 0., 1., 0., 0., 0., 0.,
          1. ]
camera_1:
   camera_matrix: !!opencv-matrix
      rows: 3
      cols: 3
      dt: d
      data: [ 9.2508055921160314e+02, 0., 9.4917448708846428e+02, 0.,
          9.2438051462862404e+02, 5.5538732674369214e+02, 0., 0., 1. ]
   distortion_vector: !!opencv-matrix
      rows: 1
      cols: 5
      dt: d
      data: [ 1.0282138423473712e-01, -1.0737529543355753e-01,
          -4.9332958631295270e-05, -2.5921587082319169e-03,
          5.8968456396723191e-02 ]
   distortion_type: 0
   camera_group: 0
   img_width: 1920
   img_height: 1080
   camera_pose_matrix: !!opencv-matrix
      rows: 4
      cols: 4
      dt: d
      data: [ 9.8832522190873440e-01, 1.8454810404927229e-02,
          1.5123715056813519e-01, 2.5654265644555148e+00,
          -7.2221073233336517e-03, 9.9719579925113933e-01,
          -7.4487442711451926e-02, 9.0133405564959954e-02,
          -1.5218770287004416e-01, 7.2525567414531630e-02,
          9.8568704220271619e-01, -2.2526504290470609e-01, 0., 0., 0.,
          1. ]
And I believe the precalibrated intrinsic file path is not the reason, because a wrong path would cause this error instead: [fatal] - Camera parameters path 'data/intrinsic.yml' doesn't exist.
The camera pictures and config file are here: data.7z
Thank you very much; it makes much more sense now. I can see a few issues in your precalibrated data.
Usually, the principal point is located near the center of the image. Your image size seems to be 1920x1080, so it is highly improbable that the principal point is located at pixel [330, 330]. In contrast, the calibration toolbox provides realistic estimates of your camera parameters.
What is the distortion model provided in your original calibration? The values seem particularly high (especially compared to the Brown model output provided by MC-Calib).
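For reference, the Brown model mentioned here (distortion_model: 0 in the config) combines radial and tangential terms; this is a minimal sketch applying it to normalized image coordinates, using camera_0's coefficients from the yml above in OpenCV's [k1, k2, p1, p2, k3] order:

```python
# Brown (radial + tangential) distortion coefficients of camera_0,
# in OpenCV order [k1, k2, p1, p2, k3], rounded from the yml above
k1, k2, p1, p2, k3 = 0.083885, -0.066332, 0.0011176, -0.0063244, 0.032263

def distort(x, y):
    """Apply the Brown model to a point in normalized camera coordinates."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

print(distort(0.0, 0.0))  # (0.0, 0.0): no distortion on the optical axis
```

With coefficients this small, the displacement stays mild even near the image border, which is why much larger values in a precalibrated file would stand out.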
Could you please try the following:
1. Calibrate the cameras using MC-Calib (no fixed parameters).
2. Save the results in a yml file.
3. Try to calibrate with fixed intrinsics, using the result yml file generated during the previous calibration.
This test will reveal whether the problem is related to our code itself or to the data stored in the intrinsic input file.
I also have a question: where did you find these calibration results? I believe you need additional information to understand exactly which camera they belong to. In this kind of active vision system, the calibration is often provided for the infrared camera, so it might be confusing.
I'm using the Azure Kinect DK, which contains a depth sensor and an RGB camera. I had previously used the intrinsics of the depth camera by mistake, so the MC-Calib intrinsic calibration results should be correct. But when I use the extrinsic result to transform the point clouds, I get this: pointclouds.ply. It seems like the two point clouds are moving further apart.
The transformation may be inverted compared to the referential used in their toolbox. I think you should test with the inverse parameters. For instance, transform the rotation and translation as follows: Rot = Rot.T and Trans = -Rot.T @ Trans. I hope it will work properly.
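The suggested inversion can be sketched in NumPy; the 4x4 pose below is camera_1's camera_pose_matrix from the yml above, rounded for readability (how your point-cloud tool consumes the result is up to you):

```python
import numpy as np

# camera_1's camera_pose_matrix, rounded from the yml above
T = np.array([[ 0.98833,  0.01845,  0.15124,  2.56543],
              [-0.00722,  0.99720, -0.07449,  0.09013],
              [-0.15219,  0.07253,  0.98569, -0.22527],
              [ 0.0,      0.0,      0.0,      1.0    ]])
R, t = T[:3, :3], T[:3, 3]

# Invert the rigid transform: Rot = Rot.T, Trans = -Rot.T @ Trans
T_inv = np.eye(4)
T_inv[:3, :3] = R.T
T_inv[:3, 3] = -R.T @ t

# Sanity check: composing a rigid transform with its inverse gives identity
print(np.allclose(T @ T_inv, np.eye(4), atol=1e-3))  # True
```

If the clouds drift apart with T, applying T_inv instead (or T to the other cloud) should bring them together.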
System information (version)
Vision system
configs.yml
######################################## Boards Parameters ###################################################
number_x_square: 5         # number of squares in the X direction
number_y_square: 5         # number of squares in the Y direction
resolution_x: 500          # horizontal resolution in pixel
resolution_y: 500          # vertical resolution in pixel
length_square: 0.04        # parameters on the marker (can be kept as it is)
length_marker: 0.03        # parameters on the marker (can be kept as it is)
number_board: 1            # number of boards used for calibration (for overlapping cameras 1 is enough ...)
boards_index: []           # leave it empty [] if the board indices range from zero to number_board
                           # example of usage boards_index: [5,10] <-- only two boards with index 5/10
square_size: 0.39          # size of each square of the board in cm/mm/whatever you want

############# Boards Parameters for different board size (leave empty if all boards have the same size) #############
number_x_square_per_board: []
number_y_square_per_board: []
square_size_per_board: []

######################################## Camera Parameters ###################################################
distortion_model: 0        # 0: Brown (perspective) // 1: Kannala (fisheye)
distortion_per_camera: []  # specify the model per camera;
                           # leave "distortion_per_camera" empty [] if they all follow the same model
                           # (make sure that the vector is as long as the number of cameras)
number_camera: 2           # number of cameras in the rig to calibrate
refine_corner: 1           # activate or deactivate the corner refinement
min_perc_pts: 0.5          # min percentage of points visible to assume a good detection
cam_params_path: "../data/intrinsic.yml" # "../../Images_Plan/calibrated_cameras_data.yml" # file with camera intrinsics to initialize the intrinsics; write "None" if no initialization available
fix_intrinsic: 1           # if 1 then the intrinsic parameters will not be estimated nor refined (initial value needed)

######################################## Images Parameters ###################################################
root_path: "../data/img/"  # "../../Images_Sim1Cam3Board/" "../../Images_NonOver3/" "../../Images_Cube/" "../../Images_Plan/" "../../Images_NonOver6Cam/"
cam_prefix: "Cam"

######################################## Optimization Parameters #############################################
ransac_threshold: 3        # RANSAC threshold in pixel (keep it high just to remove strong outliers)
number_iterations: 1000    # max number of iterations for the non-linear refinement

######################################## Hand-eye method #####################################################
he_approach: 0             # 0: bootstrapped he technique, 1: traditional he

######################################## Output Parameters ###################################################
save_path: "../data/Result/"
save_detection: 1
save_reprojection: 1
camera_params_file_name: ""
Describe the issue / bug
I want to use this tool to calibrate multiple Azure Kinect DK devices; the config file is shown above. When I run the program, I get this error:
And if I don't use fixed intrinsics, i.e., use this toolbox for both intrinsic and extrinsic calibration, I get a totally different result compared with the original Kinect intrinsics:
calibrated_cameras_data.yml
I'm a newbie in camera calibration, so please correct me if there are any mistakes.