utiasSTARS / pykitti

Python tools for working with KITTI data.
MIT License

Estimating pose and warping gives incorrect results #61

Closed. NagabhushanSN95 closed this issue 3 years ago.

NagabhushanSN95 commented 3 years ago

Hi, I'm trying to warp frame1 to the view of frame2. When we do that, apart from moving objects, the static parts of the scene should overlap almost perfectly. I'm estimating depth using OpenCV from the stereo images. Incorrect depth estimation can lead to some errors, but overall the warped image should still look similar to frame2.
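For reference, a minimal sketch of the kind of stereo depth estimation I mean, using pykitti and OpenCV (the drive path/ids and the matcher parameters below are only placeholders, not my exact values):

    import cv2
    import numpy as np
    import pykitti

    # Load a KITTI raw drive (the path, date and drive id are placeholders)
    dataset = pykitti.raw('/path/to/kitti_raw', '2011_09_26', '0001')

    # Rectified colour pair (cam2, cam3) for one frame
    left, right = dataset.get_rgb(0)
    left = cv2.cvtColor(np.array(left), cv2.COLOR_RGB2GRAY)
    right = cv2.cvtColor(np.array(right), cv2.COLOR_RGB2GRAY)

    # Semi-global block matching; these parameters are illustrative only
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0

    # depth = focal_length * baseline / disparity, for valid disparities
    f = dataset.calib.K_cam2[0, 0]
    baseline = dataset.calib.b_rgb  # colour stereo baseline in metres
    depth1 = np.where(disparity > 0, f * baseline / disparity, 0.0)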

I'm using this code to estimate the poses of frame1 and frame2. When I warp using these, the warped_image is completely black. When I looked at the relative transformation matrix, the translation values were unreasonably large. Any idea how I can compute the relative transformation between frame1 and frame2?

I tried scaling the translation values down by a factor of 10000. In that case the warped image is not completely black; there is some transformation. However, the transformation is completely different from what is actually observed between the frames, which suggests the rotation matrix may also be incorrect, or that I'm using them incorrectly.
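In case it clarifies what I'm trying to do, here is a rough forward-warping sketch. image1, depth1, K and the 4x4 relative transform T_1to2 are assumed inputs (K would be calib.K_cam2, and T_1to2 is the relative pose I'm asking about):

    import numpy as np

    def warp_frame1_to_frame2(image1, depth1, K, T_1to2):
        """Forward-warp image1 into the view of frame2.

        image1: (H, W, 3) colour image, depth1: (H, W) depth in metres,
        K: (3, 3) camera intrinsics, T_1to2: (4, 4) relative transform.
        """
        h, w = depth1.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        valid = depth1 > 0

        # Back-project frame1 pixels to 3D points in the camera-1 frame
        pixels = np.stack([u[valid], v[valid], np.ones(valid.sum())])
        points1 = np.linalg.inv(K) @ pixels * depth1[valid]

        # Transform the points into the camera-2 frame
        points2 = T_1to2[:3, :3] @ points1 + T_1to2[:3, 3:4]

        # Project into frame2 and splat colours (no occlusion handling)
        proj = K @ points2
        u2 = np.round(proj[0] / proj[2]).astype(int)
        v2 = np.round(proj[1] / proj[2]).astype(int)
        ok = (proj[2] > 0) & (u2 >= 0) & (u2 < w) & (v2 >= 0) & (v2 < h)

        warped = np.zeros_like(image1)
        warped[v2[ok], u2[ok]] = image1[valid][ok]
        return warped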

brucemuller commented 3 years ago

@NagabhushanSN95 Hi, I'm not sure I can help with the depth-based warping, but warping worked okay for me using a homography for the road surface.

To estimate relative pose between frames I'm using:

    import torch

    # Transformation from world coordinates to camera 2 (colour) in frame i
    T_i = torch.matmul(torch.FloatTensor(self.dataset.calib.T_cam2_imu),
                       torch.inverse(torch.FloatTensor(self.dataset.oxts[frame_cami].T_w_imu)))

    # Extract rotation and translation from world to camera frame i
    R_i = torch.FloatTensor(T_i)[0:3, 0:3]
    t_i = torch.FloatTensor(T_i)[0:3, 3].unsqueeze(-1)

    # Transformation from world coordinates to camera 2 (colour) in frame j
    T_j = torch.matmul(torch.FloatTensor(self.dataset.calib.T_cam2_imu),
                       torch.inverse(torch.FloatTensor(self.dataset.oxts[frame_camj].T_w_imu)))

    # Extract rotation and translation from world to camera frame j
    R_j = torch.FloatTensor(T_j)[0:3, 0:3]
    t_j = torch.FloatTensor(T_j)[0:3, 3].unsqueeze(-1)

    # Relative pose: transformation from frame i to frame j coordinate systems
    R_itoj = torch.matmul(R_j, torch.transpose(R_i, -2, -1))
    t_itoj = t_j - torch.matmul(R_itoj, t_i)
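
For completeness, a small sketch of packing these into a single 4x4 transform and applying it to a batch of homogeneous points (the points here are random placeholders):

    # Pack rotation and translation into a 4x4 homogeneous transform (i -> j)
    T_itoj = torch.eye(4)
    T_itoj[0:3, 0:3] = R_itoj
    T_itoj[0:3, 3] = t_itoj.squeeze(-1)

    # Apply to a batch of homogeneous 3D points expressed in frame i
    points_i = torch.cat([torch.rand(10, 3), torch.ones(10, 1)], dim=-1)
    points_j = torch.matmul(T_itoj, points_i.t()).t()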

Did you use something similar?
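
Regarding the road-surface part: the usual plane-induced homography mapping frame-i pixels to frame-j pixels is H = K (R_itoj - t_itoj n^T / d) K^{-1}, where n is the road-plane normal and d the plane distance in the camera-i frame. A rough generic sketch (n and d are assumptions you would need to estimate; this is not my exact code):

    import cv2
    import numpy as np

    def plane_homography(K, R_itoj, t_itoj, n, d):
        """Homography induced by a plane, mapping frame-i pixels to frame-j.

        K: (3, 3) intrinsics, R_itoj: (3, 3), t_itoj: (3, 1) relative pose,
        n: (3, 1) unit normal of the plane in the camera-i frame,
        d: distance from the camera-i centre to the plane, in metres.
        """
        H = K @ (R_itoj - (t_itoj @ n.T) / d) @ np.linalg.inv(K)
        return H / H[2, 2]

    # e.g. warp the road region of image_i into the view of frame j:
    # warped = cv2.warpPerspective(image_i, plane_homography(K, R, t, n, d), (w, h))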

NagabhushanSN95 commented 3 years ago

Hi @brucemuller, thanks for the reply. Since it has been a while, I don't remember exactly what I did; I was trying a bunch of things. From what I recall, I don't think I used such an elaborate procedure.

I'll close this issue for now and try it again sometime later; if it doesn't work out, I'll reopen it. If you can share your homography code and its results, it would help me debug any issues I run into.