mapillary / OpenSfM

Open source Structure-from-Motion pipeline
https://www.opensfm.org/
BSD 2-Clause "Simplified" License

Using camera poses in OpenCV #442

Closed: simoneBrancati closed this issue 5 years ago

simoneBrancati commented 5 years ago

Hi, I'm trying to use the rotation and translation from the shots found in reconstruction.json with OpenCV, to render a 3D object in the same pose as the image. The render function is this:

import cv2
import numpy as np

def render(image, objFile, rvecs, tvecs, cameraMatrix, distortionCoeffs, scale=3):
    # Read the vertices from the OBJ model
    vertices = objFile.vertices
    # Scale the model manually
    scale_matrix = np.eye(3) * scale
    # Iterate over the OBJ model faces
    for face in objFile.faces:
        face_vertices = face[0]
        points = np.array([vertices[vertex - 1] for vertex in face_vertices])
        # Scale the model
        points = np.dot(points, scale_matrix)
        # Project the 3D points onto the image plane
        dst, jacobian = cv2.projectPoints(points, rvecs, tvecs, cameraMatrix, distortionCoeffs)
        imgpts = np.int32(dst)
        # Render the face
        cv2.fillConvexPoly(image, imgpts, (255, 37, 12))

    return image

When I manually pass the rotation vector and the translation vector (rvecs and tvecs) from reconstruction.json, the 3D object is projected way off the borders of the frame, as if the values of these vectors were too big.

Am I missing a coordinate conversion or some other transformation?
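
For reference, a minimal sketch of how I pull the values out of reconstruction.json (the image key "some_image.jpg" is just a placeholder):

import json
import numpy as np

# reconstruction.json holds a list of reconstructions; take the first one
with open("reconstruction.json") as f:
    reconstruction = json.load(f)[0]

# Shots are keyed by image name
shot = reconstruction["shots"]["some_image.jpg"]
rvecs = np.array(shot["rotation"])     # axis-angle (Rodrigues) rotation
tvecs = np.array(shot["translation"])  # translation of the camera pose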

paulinus commented 5 years ago

The rvec and tvec are in the same format that OpenCV's projectPoints expects. The problem might come from the cameraMatrix. You can use this method to get the camera matrix in OpenCV's format from an OpenSfM camera object.
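
A minimal sketch along those lines, assuming a perspective camera and the Python DataSet API (the dataset path and image key are placeholders, and get_K_in_pixel_coordinates() is assumed to be the helper in question):

import numpy as np
from opensfm import dataset

# Assumed layout: an OpenSfM dataset on disk at "path/to/dataset"
data = dataset.DataSet("path/to/dataset")
reconstruction = data.load_reconstruction()[0]

shot = reconstruction.shots["some_image.jpg"]  # placeholder image key
camera = shot.camera

# Calibration matrix in pixel coordinates (normalized focal scaled by the
# larger image dimension, principal point at the image center)
cameraMatrix = camera.get_K_in_pixel_coordinates()

# OpenCV's distortion vector is (k1, k2, p1, p2, k3); OpenSfM's perspective
# model only carries the two radial terms
distortionCoeffs = np.array([camera.k1, camera.k2, 0.0, 0.0, 0.0])

rvecs = shot.pose.rotation     # axis-angle, as cv2.projectPoints expects
tvecs = shot.pose.translation

# These plug straight into the render() function above:
# rendered = render(image, objFile, rvecs, tvecs, cameraMatrix, distortionCoeffs)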

pmoulon commented 5 years ago

@paulinus I noticed a typo in the help text of the method you linked: versior -> version

simoneBrancati commented 5 years ago

Thanks, that was the correct solution.