Closed: IjlalBaig closed this issue 1 year ago
Generally speaking, you need 2D tracks to build a point cloud; each 2D track corresponds to one 3D point. Such 2D tracks can be constructed from 2D matches, e.g., pixSfM uses SP&SG (SuperPoint + SuperGlue) to build its 2D tracks.
With the 2D tracks, you can simply triangulate them using `pred_cameras` to obtain the 3D points.
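To make that last step concrete, here is a minimal sketch of multi-view DLT triangulation. It assumes you have already converted the predicted cameras into 3x4 projection matrices (P = K[R|t]) and collected 2D tracks from your matcher; the names `projection_matrices` and `tracks` are placeholders for illustration, not part of this repository's API.

```python
# Minimal sketch: triangulate 2D tracks with known projection matrices (DLT).
# Assumption: predicted cameras have already been converted to 3x4 matrices
# P = K [R | t]; `projection_matrices` and `tracks` are hypothetical names.
import numpy as np

def triangulate_track(projection_matrices, track_points):
    """Triangulate one 3D point from a single 2D track.

    projection_matrices: list of (3, 4) arrays, one per view observing the track.
    track_points: (N, 2) array of pixel coordinates, one row per view.
    Returns the (3,) world-space point.
    """
    rows = []
    for P, (x, y) in zip(projection_matrices, track_points):
        # Each observation contributes two linear constraints on the homogeneous point X.
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    # The solution is the right singular vector for the smallest singular value of A.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

def triangulate_tracks(projection_matrices_per_track, tracks):
    """Triangulate every track to build a sparse point cloud (num_tracks, 3)."""
    return np.stack([
        triangulate_track(Ps, pts)
        for Ps, pts in zip(projection_matrices_per_track, tracks)
    ])
```

The resulting sparse point cloud can then be visualized or compared against COLMAP's reconstruction of the same images.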
I would like to get a point cloud prediction so I can compare results with COLMAP's output.
I observe that model inference yields a dictionary:
`{"pred_cameras": pred_cameras, "z": z}`
where `z` is a feature vector and NOT 3D point locations. How could I recreate a point cloud visualization like the one in Figure 6 of the paper? Thanks in advance!