Hi,

I am using your pretrained model on an image from the OmniCam dataset; the input and output inverse depth maps are attached. However, when I try to get the point cloud using the `reconstruct()` function in `camera_generic.py` as:
```python
def reconstruct(depth, frame='c'):
    """
    Reconstructs pixel-wise 3D points from a depth map: P(x, y) = s(x, y) + d(x, y) * r(x, y)

    Parameters
    ----------
    depth : torch.Tensor [B,1,H,W]
        Inverse depth map for the camera
    frame : str
        Reference frame: 'c' for camera and 'w' for world

    Returns
    -------
    points : torch.Tensor [B,3,H,W]
        Pixel-wise 3D points
    """
    B, _, H, W = depth.shape           # depth is [B,1,H,W], not [H,W]
    Xc = Rmat * (1.0 / depth)          # 1.0 / depth converts inverse depth to depth
    # If in camera frame of reference
    if frame == 'c':
        return Xc
```
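For reference, here is a minimal, framework-agnostic sketch of the formula in the docstring, P(x, y) = s(x, y) + d(x, y) * r(x, y), for a generic (ray-table) camera model. This is only my understanding of the intended math, not your implementation: the names `reconstruct_from_inv_depth`, `origins` (s), and `rays` (r) are hypothetical, and in the real code those per-pixel quantities would come from the calibrated camera rather than being placeholders.

```python
import numpy as np

def reconstruct_from_inv_depth(inv_depth, origins, rays):
    """inv_depth: [H, W]; origins, rays: [3, H, W] -> points: [3, H, W]."""
    depth = 1.0 / inv_depth              # invert the inverse depth first
    return origins + depth[None] * rays  # P = s + d * r, broadcast over xyz

# Toy usage: central camera (s = 0), unit rays along +z, inverse depth 0.5
H, W = 2, 2
inv_depth = np.full((H, W), 0.5)
origins = np.zeros((3, H, W))
rays = np.zeros((3, H, W))
rays[2] = 1.0                            # all rays point along +z
pts = reconstruct_from_inv_depth(inv_depth, origins, rays)
# every point ends up at z = 1 / 0.5 = 2
```

If this matches what `reconstruct()` is supposed to compute, then multiplying the depth by a rotation matrix alone (without the per-pixel ray directions) would not give the same result.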
The output point cloud comes out wrong, as shown in the attachment. Where could the problem be? Thanks!