**Open** — TsingLoo opened this issue 1 year ago
It seems that this line is wrong: https://github.com/hou-yz/MultiviewX/blob/644b90aa13585bd28d730d32d4d0411a792d2063/calibrateCamera.py#L32
As the OpenCV documentation mentions, `objectPoints` should be a vector of vectors of calibration pattern points in the calibration pattern coordinate space (one vector per view). However, the `cv2.calibrateCamera` call here only receives corresponding 2D-3D points from a single view, which may not be enough for a reliable calibration. Besides, I don't understand why the image size is passed as h*w, while the input frame is actually w*h.
I got a usable calibration with CalibrationTool.
Your research and project inspire me a lot. I have synthesized new frames and point data from my Unity project. However, when it comes to the calibration stage, something goes wrong.
Since I set up the scene with six cameras instantiated from the same prefab at different locations and orientations, the intrinsics of each camera should be identical and only the extrinsics should vary. However, when running the `calibrateCamera.py` script, only some of the cameras get calibrated correctly.
A correct calibration looks like this:

```xml
<data>
  9.3530737478068772e+02 0. 9.5999987323376331e+02
  0. 9.3530737395304050e+02 5.3999992195891150e+02
  0. 0. 1.
</data>
```
The fx, fy, cx, cy of the wrong calibrations show a large discrepancy from the correct ones, for example:
```xml
<data>
  6.2205436588916064e+02 0. 7.6815935516113848e+02
  0. 6.0663721630513464e+02 8.1161463688244953e+02
  0. 0. 1.
</data>
```
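One way to sanity-check which calibrations are correct is to compute the intrinsics the cameras *should* have directly from the Unity camera settings. This sketch assumes a 1920x1080 frame and Unity's default 60-degree vertical field of view (both are assumptions, though they are consistent with the correct matrix above):

```python
import math

w, h = 1920, 1080        # frame size in pixels (assumed)
fov_y_deg = 60.0         # Unity Camera.fieldOfView is the vertical FOV (default 60)

fy = h / (2 * math.tan(math.radians(fov_y_deg) / 2))  # focal length in pixels
fx = fy                  # square pixels in a synthetic render
cx, cy = w / 2, h / 2    # principal point at the image center

print(fx, fy, cx, cy)    # fx = fy ~ 935.31, cx = 960.0, cy = 540.0
```

Any calibrated camera matrix that deviates substantially from these values (like the second matrix above, where fx differs from fy and cy is far from h/2) indicates a failed calibration rather than a genuinely different camera.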
I followed the structure of the demo data in the 'matchings' folder; my data is available here: matchings.zip
I would like to know whether there is anything that needs special attention when running the `calibrateCamera.py` script.