NagabhushanSN95 opened 3 weeks ago
Update: I took 8 consecutive frames from each of the three videos. Assuming that the object motion between consecutive frames is not too large, I ran COLMAP on them. The relative rotation matrices I got match the dataset, but the translations do not. It is not a simple scale factor either; the translations are very different. An example is below.
Relative extrinsics between `0_00008` and `1_00008` obtained from COLMAP:
```
array([[ 0.8271133 ,  0.41969466, -0.37381811,  4.28686663],
       [-0.52556144,  0.81325596, -0.24979976, -1.12470238],
       [ 0.19917018,  0.40307709,  0.89323015,  4.24574654],
       [ 0.        ,  0.        ,  0.        ,  1.        ]])
```
Relative extrinsics between the same frames obtained from the dataset:

```
array([[ 0.82652076,  0.42485099, -0.36927638, -0.17995098],
       [-0.52475049,  0.8189421 , -0.23231606,  0.08859444],
       [ 0.20371627,  0.38579203,  0.8998134 ,  0.07466149],
       [ 0.        ,  0.        ,  0.        ,  1.        ]])
```
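For reference, this is essentially how I compute the relative extrinsic between two world-to-camera (w2c) poses, and how I check whether two translations differ only by a global scale (COLMAP's translation scale is arbitrary, so only the direction is comparable). The poses `w2c_a` and `w2c_b` below are placeholder values, not the actual ones:

```python
import numpy as np

def relative_extrinsic(w2c_a: np.ndarray, w2c_b: np.ndarray) -> np.ndarray:
    """Relative transform taking camera-a coordinates to camera-b coordinates.

    Both inputs are 4x4 world-to-camera matrices, so
    T_rel = w2c_b @ inv(w2c_a).
    """
    return w2c_b @ np.linalg.inv(w2c_a)

# Placeholder poses for illustration only.
w2c_a = np.eye(4)
w2c_b = np.eye(4)
w2c_b[:3, 3] = [0.5, 0.0, 0.0]

T_rel = relative_extrinsic(w2c_a, w2c_b)

# Monocular SfM translations are defined only up to scale, so compare the
# normalized translation directions rather than the raw vectors.
t = T_rel[:3, 3]
t_dir = t / np.linalg.norm(t)
print(t_dir)
```

In my case even the normalized translation directions disagree, which is why I suspect this is not just a scale issue.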
From the existing documentation, it appears that the camera extrinsics follow the OpenCV convention (x, -y, -z), i.e. (right, down, into the scene), and are in world-to-camera (w2c) format. Is this right?

I tried warping a frame from the apple scene to the other viewpoint using the given depth. Can you help me with the details? Should the depth be scaled?
The warped frame does not match the second frame. I used this code to warp the frame:
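In essence, the warp does the following (a minimal sketch assuming a shared pinhole intrinsic matrix `K`, z-depth in the OpenCV convention, and nearest-neighbour splatting without z-buffering; these are my assumptions, not confirmed by the dataset docs):

```python
import numpy as np

def warp_frame(frame, depth, K, T_rel):
    """Forward-warp `frame` (H, W, 3) from view 1 into view 2.

    depth: (H, W) per-pixel depth along the camera z-axis.
    K:     (3, 3) pinhole intrinsics, assumed shared by both views.
    T_rel: (4, 4) relative w2c transform, T_rel = w2c_2 @ inv(w2c_1).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # (3, N)

    # Back-project pixels to 3D points in camera-1 coordinates.
    pts1 = np.linalg.inv(K) @ pix * depth.reshape(1, -1)

    # Move the points into camera-2 coordinates.
    pts2 = T_rel[:3, :3] @ pts1 + T_rel[:3, 3:4]

    # Project into view 2.
    proj = K @ pts2
    u2 = np.round(proj[0] / proj[2]).astype(int)
    v2 = np.round(proj[1] / proj[2]).astype(int)

    # Splat valid points (nearest neighbour; enough to eyeball alignment).
    warped = np.zeros_like(frame)
    valid = (proj[2] > 0) & (u2 >= 0) & (u2 < w) & (v2 >= 0) & (v2 < h)
    warped[v2[valid], u2[valid]] = frame.reshape(-1, 3)[valid]
    return warped
```

With `T_rel` set to the identity, this reproduces the input frame exactly, so I believe the projection math itself is fine; the mismatch must come from the extrinsics convention or the depth scale.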
I read the data as follows:
The frames look like the ones below.