Closed: felipesce closed this issue 2 years ago
Same issue here
Try this:

```python
depth = torch.as_tensor(depth)
depth = F.interpolate(depth[None, None], (h1, w1)).squeeze()
depth = depth[:h1 - h1 % 8, :w1 - w1 % 8]
yield t, image[None], depth, intrinsics
```
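In case it helps, here is a self-contained sketch of what that snippet does, with a made-up depth map and made-up `h1`/`w1` values (in the real dataloader those come from the resized image):

```python
import numpy as np
import torch
import torch.nn.functional as F

# Hypothetical raw depth map, e.g. as loaded from a depth image on disk.
depth = np.random.rand(480, 640).astype(np.float32)
h1, w1 = 384, 512  # illustrative resized-image dimensions

# Add batch and channel dims so F.interpolate accepts it, resize to the
# image resolution, then squeeze back to a plain 2-D (h, w) map.
depth = torch.as_tensor(depth)
depth = F.interpolate(depth[None, None], (h1, w1)).squeeze()

# Crop both sides to multiples of 8, mirroring the crop applied to the image.
depth = depth[:h1 - h1 % 8, :w1 - w1 % 8]
print(depth.shape)  # torch.Size([384, 512]) here, since 384 and 512 divide by 8
```

The key point is that the depth map ends up as a 2-D tensor with exactly the same height and width as the cropped image yielded alongside it.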
Thank you! May the gods bless you with a happy and fulfilling life!
> Try this:
>
> ```python
> depth = torch.as_tensor(depth)
> depth = F.interpolate(depth[None, None], (h1, w1)).squeeze()
> depth = depth[:h1 - h1 % 8, :w1 - w1 % 8]
> yield t, image[None], depth, intrinsics
> ```
I did the exact same thing, but empty point clouds are generated during the reconstruction process. Do I need to make changes in other parts of the source code?
Hey there!
I'm trying to use DROID-SLAM with RGB-D input.
I noticed that the track function in droid.py accepts depth as an input, but I found no info on the correct format, or on whether something more must be done.
This is my new image_stream function:

```python
def image_stream(imagedir, depthdir, calib, stride):
    calib = np.loadtxt(calib, delimiter=" ")
    fx, fy, cx, cy = calib[:4]
```
This is my current error:
```
RuntimeError: expand(torch.FloatTensor{[0, 41, 584]}, size=[41, 73]): the number of sizes provided (2) must be greater or equal to the number of dimensions in the tensor (3)
```
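For reference, this error can be reproduced in isolation; the sketch below uses the sizes from the traceback, not anything from DROID-SLAM internals. `expand()` cannot reduce a tensor's number of dimensions, so a 3-D depth tensor (e.g. one with a leftover batch dim) fed where a 2-D (h, w) map is expected fails in exactly this way:

```python
import torch

# 3-D tensor with an empty leading dim, matching the shape in the traceback.
bad_depth = torch.zeros(0, 41, 584)
try:
    # Asking expand() for fewer dims (2) than the tensor has (3) raises.
    bad_depth.expand(41, 73)
except RuntimeError as err:
    print(err)

# A plain 2-D depth map of the expected size expands without error.
good_depth = torch.zeros(41, 73)
expanded = good_depth.expand(41, 73)
print(expanded.shape)  # torch.Size([41, 73])
```

So a likely culprit is depth being passed with an extra (possibly empty) leading dimension instead of as a plain 2-D tensor.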
Any help on this matter would be greatly appreciated, thanks for this amazing software.