colmap / pycolmap

Python bindings for COLMAP
BSD 3-Clause "New" or "Revised" License

Points 3d #1

Closed Zumbalamambo closed 4 years ago

Zumbalamambo commented 4 years ago

Thank you for the python binding :)

I have the keypoints of two images that I have matched, and also the pose information of one of the images. How do I obtain the 3D points for absolute pose estimation?

mihaidusmanu commented 4 years ago

You will need a way to lift the 2D points from the image with pose to 3D - depth or a sparse 3D model for instance.

In this repository, we assume you already have access to 2D<->3D correspondences for absolute pose estimation, so building the sparse model is out of scope.

Zumbalamambo commented 4 years ago

@mihaidusmanu I have not managed to retrieve the 3D points from the 2D points using the pose information of the image. Could you please suggest a method that I can use to retrieve the 3D points from a 2D image with a known pose?

mihaidusmanu commented 4 years ago

Given an image with camera pose (world-to-camera) P and intrinsics matrix K, you can back-project a pixel location (x, y) with depth d = depth(x, y) to 3D by computing P^{-1} @ (d * K^{-1} @ [x, y, 1]), where @ denotes matrix multiplication: K^{-1} lifts the pixel to a ray in camera coordinates, d scales the ray to the measured depth, and P^{-1} transforms the point from camera to world coordinates.

As I mentioned above, in order to do this, you will need the depth at pixel (x, y). This can be obtained, for instance, using an RGB-D camera.

If you do not have access to an RGB-D camera, you might want to look into Structure-from-Motion - i.e., building a 3D model from a sequence of images.
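The back-projection described above can be sketched with numpy. The intrinsics below are the ones quoted later in this thread; the pose P is a made-up 4x4 world-to-camera matrix for illustration only.

```python
import numpy as np

# Intrinsics from the thread; P is a hypothetical world-to-camera pose.
K = np.array([[867.68, 0.0, 482.69],
              [0.0, 867.60, 271.17],
              [0.0, 0.0, 1.0]])
P = np.eye(4)
P[:3, 3] = [0.1, 0.0, 0.5]  # small illustrative translation

def backproject(x, y, depth, K, P):
    # Lift the pixel to a ray in camera coordinates and scale it to the depth.
    ray = np.linalg.inv(K) @ np.array([x, y, 1.0])
    cam_point = ray * depth
    # Transform from camera to world coordinates with the inverse pose.
    world = np.linalg.inv(P) @ np.append(cam_point, 1.0)
    return world[:3]

pt = backproject(482.69, 271.17, 2.0, K, P)  # principal point, 2 m depth
print(pt)
```

At the principal point the camera-frame ray is [0, 0, 1], so the result is just the depth along the optical axis, shifted by the inverse pose translation.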

Zumbalamambo commented 4 years ago

@mihaidusmanu thank you! I have managed to extract the 3D points using the depth image.

import cv2

# flattened 3x3 camera matrix, row-major: [fx, 0, cx, 0, fy, cy, 0, 0, 1]
mtx = [8.6767822265625000e+02, 0., 4.8268579101562500e+02, 0.,
       8.6760119628906250e+02, 2.7117459106445312e+02, 0., 0., 1.]

# parse intrinsics from the flattened matrix
fx = mtx[0]
cx = mtx[2]

fy = mtx[4]
cy = mtx[5]

depth_img = cv2.imread("depth.png", cv2.IMREAD_ANYDEPTH)
depth_row, depth_column = depth_img.shape

# to store 3d points
pts3d = []

for i in range(depth_row):
    for j in range(depth_column):

        # retrieve depth value (row i is image y, column j is image x)
        depth = depth_img[i, j]
        x, y = j, i

        if depth > 0:
            x3D = (x - cx) * depth / fx
            y3D = (y - cy) * depth / fy
            z3D = depth

            # populate 3d points array
            pts3d.append([x3D, y3D, z3D])
        else:
            # keep the array aligned with pixel order; mark invalid depth
            pts3d.append([-1, -1, -1])
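The per-pixel double loop can also be vectorized with numpy, which is much faster for full-resolution depth maps. Note that the column index is image x and the row index is image y; the intrinsics below are the ones from the thread, and the depth units (often millimeters in 16-bit PNGs) depend on the sensor.

```python
import numpy as np

# Intrinsics from the thread.
fx, fy, cx, cy = 867.68, 867.60, 482.69, 271.17

def depth_to_points(depth_img):
    """Back-project a depth image to an HxWx3 point map in one shot."""
    h, w = depth_img.shape
    # v runs along rows (image y), u along columns (image x).
    v, u = np.mgrid[0:h, 0:w]
    z = depth_img.astype(np.float64)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1)
    pts[z <= 0] = -1  # mark invalid depth, mirroring the loop version
    return pts

pts = depth_to_points(np.full((4, 4), 2.0))
print(pts.shape)
```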

I have keypoints from SIFT. Any idea on "points2D", which is the first parameter?

mihaidusmanu commented 4 years ago

In this case, points2D would be the 2D keypoints in a different image corresponding to the 3D points lifted in the current image. I am unsure what you are trying to achieve. Maybe if you explain that, I could be of more help. What is your current evaluation scenario? Do you want to get the relative pose between two images or do you want to get the absolute pose of an image given a pointcloud?

Zumbalamambo commented 4 years ago

I would like to obtain the absolute pose of an image given a point cloud. I'm still having a hard time trying to get the perfect absolute position :( Is there any example code?

mihaidusmanu commented 4 years ago

Do you have descriptors for both the point cloud and the local features in the image? If so, then all you need to implement yourself is matching between the descriptors. Next, you will have to give the 2D points and their 3D point correspondences to pycolmap.absolute_pose_estimation as shown in the README, and that will return the pose.
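Putting the pieces together, a minimal sketch of the call might look like the following. The correspondences and camera numbers are hypothetical, the camera dict follows the README's style at the time of this thread (SIMPLE_PINHOLE with params [f, cx, cy]), and the exact signature and return keys may differ across pycolmap versions, so the call is guarded so the sketch still runs without pycolmap.

```python
import numpy as np

# Hypothetical matched correspondences: N 2D keypoints in the query image
# and their N 3D points lifted from the depth image of the reference view.
points2D = np.array([[520.0, 280.0], [610.5, 150.2], [340.7, 400.1], [480.0, 90.3]])
points3D = np.array([[0.1, 0.2, 1.5], [0.4, -0.1, 2.0], [-0.3, 0.5, 1.8], [0.0, -0.4, 2.2]])

# COLMAP-style camera dict (SIMPLE_PINHOLE: f, cx, cy) -- values are made up.
camera = {
    'model': 'SIMPLE_PINHOLE',
    'width': 960,
    'height': 540,
    'params': [867.68, 482.69, 271.17],
}

try:
    import pycolmap
    # Signature and return keys follow the README of this era and are an
    # assumption; newer pycolmap versions use a different API.
    ans = pycolmap.absolute_pose_estimation(points2D, points3D, camera, 12.0)
    print(ans)
except Exception as exc:
    print('pycolmap call skipped:', exc)
```

Four correspondences is the bare minimum for illustration; in practice you want many RANSAC-filtered matches for a stable pose.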

Zumbalamambo commented 4 years ago

I have implemented the first portion so I have points2D from the image and points3D from the depth image.

For the camera parameters, I have used the (fx, cx, cy) values of the image corresponding to points2D.

The resulting absolute pose has an estimation error of about 50-70 cm. Am I doing it properly?

bgramaje commented 1 year ago

Hey, can you provide the solution you implemented? I am having the same trouble as you.