Open aviralchharia opened 1 year ago
Hi.
Select one of your cameras as the primary camera. Call this camera0. Then calibrate between camera0 and camera1, and camera0 and camera2 etc. So you have the transformation between cameras like this:
camera0 <---> camera1
camera0 <---> camera2
...
camera0 <---> cameraN
Then you can use camera0 as your coordinate frame origin, or determine the coordinate transform from the world origin to camera0. If you use camera0 as the origin, then you just set its rotation to the identity matrix and its translation to the zero vector.
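To make this concrete, here is a minimal sketch (not from the thread) of building per-camera projection matrices once each camera's extrinsics relative to camera0 are known. The intrinsic matrix `K` and the pose `R1`, `t1` are hypothetical placeholder values; camera0 gets identity rotation and zero translation, as described above:

```python
import numpy as np

def projection_matrix(K, R, t):
    """P = K [R | t], mapping world (= camera0) coordinates to pixels."""
    return K @ np.hstack([R, t.reshape(3, 1)])

# Hypothetical shared intrinsics for illustration.
K = np.array([[800.0,   0.0, 320.0],
              [0.0,   800.0, 240.0],
              [0.0,     0.0,   1.0]])

# camera0 is the world origin: identity rotation, zero translation.
R0, t0 = np.eye(3), np.zeros(3)

# camera1's pose relative to camera0 (would come from stereo calibration
# of the camera0/camera1 pair); values here are made up.
R1 = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])  # 90 degrees about z
t1 = np.array([1.0, 0.0, 0.0])

P0 = projection_matrix(K, R0, t0)
P1 = projection_matrix(K, R1, t1)
print(P0.shape, P1.shape)  # (3, 4) (3, 4)
```

These 3x4 matrices are exactly what `cv2.triangulatePoints` expects as its first two arguments.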
If camera0 and camera2 can't see the same scene, you can calibrate like this:

camera0 <---> camera1 <---> camera2

Then you can chain the transformations to obtain camera0 <---> camera2. But this is not recommended, because the error grows as you chain more transformations.
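Chaining the transformations amounts to composing the two rigid transforms: if x_b = R_ab x_a + t_ab and x_c = R_bc x_b + t_bc, then R_ac = R_bc R_ab and t_ac = R_bc t_ab + t_bc. A minimal sketch with made-up pairwise extrinsics:

```python
import numpy as np

def compose(R_ab, t_ab, R_bc, t_bc):
    """Chain a->b and b->c into a->c.
    x_c = R_bc (R_ab x_a + t_ab) + t_bc = (R_bc R_ab) x_a + (R_bc t_ab + t_bc)."""
    return R_bc @ R_ab, R_bc @ t_ab + t_bc

# Hypothetical pairwise extrinsics for illustration.
R_01 = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])  # 90 degrees about z
t_01 = np.array([0.5, 0.0, 0.0])
R_12 = np.eye(3)
t_12 = np.array([0.0, 0.3, 0.0])

R_02, t_02 = compose(R_01, t_01, R_12, t_12)

# Sanity check: the composed transform matches applying the two in sequence.
x0 = np.array([1.0, 2.0, 3.0])
x2_direct = R_02 @ x0 + t_02
x2_chained = R_12 @ (R_01 @ x0 + t_01) + t_12
print(np.allclose(x2_direct, x2_chained))  # True
```

Each chained pair multiplies in its own calibration error, which is why direct camera0 <---> cameraN calibration is preferred when the views overlap.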
Hi, I was trying 3D pose estimation using 3 cameras: i.e., first finding 2D keypoints in each of the 3 camera frames and then using cv2.triangulatePoints to triangulate and obtain the 3D human pose. In such a case, how do I get the camera extrinsic parameters?
Specifically, for 3 cameras, should we calibrate the pairs [C1, C2] and [C2, C3], or the pairs [C1, C2] and [C1, C3], to find the extrinsic matrix of each camera?
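For reference, once the extrinsics (here relative to C1 as the origin) are in hand, the triangulation step can be sketched end to end. `cv2.triangulatePoints` only accepts two views, so this sketch uses the equivalent linear DLT method with numpy, which generalizes to all 3 cameras at once; the intrinsics, extrinsics, and test point are hypothetical:

```python
import numpy as np

def triangulate_dlt(proj_mats, points_2d):
    """Linear (DLT) triangulation of one 3D point from N >= 2 views.
    proj_mats: list of 3x4 projection matrices; points_2d: list of (u, v)."""
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the homogeneous point.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.vstack(rows))
    X = Vt[-1]           # null vector of the stacked system
    return X[:3] / X[3]  # dehomogenize

# Hypothetical intrinsics, shared by all cameras for simplicity.
K = np.array([[800.0,   0.0, 320.0],
              [0.0,   800.0, 240.0],
              [0.0,     0.0,   1.0]])

# C1 is the world origin; C2 and C3 are translated relative to it (made up).
extrinsics = [
    (np.eye(3), np.zeros(3)),
    (np.eye(3), np.array([-1.0, 0.0, 0.0])),
    (np.eye(3), np.array([0.0, -1.0, 0.0])),
]
proj_mats = [K @ np.hstack([R, t.reshape(3, 1)]) for R, t in extrinsics]

# Project a known 3D keypoint into each view, then recover it.
X_true = np.array([0.2, -0.1, 5.0])
points_2d = []
for P in proj_mats:
    x = P @ np.append(X_true, 1.0)
    points_2d.append((x[0] / x[2], x[1] / x[2]))

X_est = triangulate_dlt(proj_mats, points_2d)
print(np.allclose(X_est, X_true, atol=1e-6))  # True
```

With noisy real keypoints the recovered point will not match exactly, but the same construction applies per keypoint of the human pose.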