SuruchiPokhrel closed this 1 year ago
I tried the same for my depth input from a Kinect camera, but did not really see any point clouds during reconstruction, and the camera trajectory also seems really off. What is the source of your depth input?
try this:
import numpy as np
import torch
import torch.nn.functional as F

depth = depth.astype(np.float32) / 5000.0  # TUM-style scale: raw / 5000 = meters
depth = torch.from_numpy(depth).float()[None, None]
depth = F.interpolate(depth, (h1, w1), mode='nearest').squeeze()  # h1, w1: resized image size
depth = depth[:h1 - h1 % 8, :w1 - w1 % 8]  # crop so both dims are multiples of 8
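For reference, the same preprocessing can be sketched without torch. This is a minimal numpy-only equivalent (the function name `preprocess_kinect_depth` and the synthetic example frame are my own, not from DROID-SLAM; the index mapping approximates `mode='nearest'` up to rounding):

```python
import numpy as np

def preprocess_kinect_depth(depth_raw, h1, w1, scale=5000.0):
    """Scale raw depth to meters, nearest-neighbor resize to (h1, w1),
    then crop so both dimensions are multiples of 8."""
    h0, w0 = depth_raw.shape
    # nearest-neighbor index mapping into the original frame
    rows = np.arange(h1) * h0 // h1
    cols = np.arange(w1) * w0 // w1
    depth = depth_raw[rows[:, None], cols].astype(np.float32) / scale
    return depth[: h1 - h1 % 8, : w1 - w1 % 8]

# example: a fake 480x640 Kinect frame resized to 240x320
raw = np.full((480, 640), 5000, dtype=np.uint16)  # 1 m everywhere
out = preprocess_kinect_depth(raw, 240, 320)
print(out.shape)  # (240, 320) -- already multiples of 8
```

The crop to multiples of 8 matters because the network downsamples by a factor of 8, so depth and image dimensions must stay aligned after resizing.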
I have been using demo.py with a custom dataset and the results are sparse. DROID-SLAM is supposed to take stereo and RGB-D input, but I don't see any way of supplying RGB-D input to demo.py. Is there a way to do so?