Closed — chudur-budur closed this 2 years ago
There is no code available for generating the depth maps, we used a custom renderer for that. We extracted a scene mesh from the TSDF volumes that come with the 7Scenes dataset (using ParaView) and rendered these using the ground truth poses.
Alternatively, you can use the original depth maps of 7Scenes and register them to the RGB images; see, e.g., https://projet.liris.cnrs.fr/voir/activities-dataset/kinect-calibration.html
Generating camera coordinates from the depth maps is easy. Refer to the generation of scene coordinates in dataset.py and omit the transformation with the camera pose.
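To make the last step concrete, here is a minimal sketch of back-projecting a depth map into per-pixel camera coordinates with a pinhole model. The function name and the intrinsic values are assumptions for illustration (a focal length of 585 with the principal point at the image center is commonly used for 7Scenes); applying the camera pose to these points on top would give scene coordinates, as done in dataset.py.

```python
import numpy as np

def depth_to_camera_coords(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into camera coordinates.

    Returns an (H, W, 3) array of (x, y, z) points in the camera frame.
    Intrinsics fx, fy, cx, cy are assumed pinhole parameters; for 7Scenes
    a focal length of 585 and a centered principal point are commonly used.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Standard pinhole back-projection: x = (u - cx) * z / fx, etc.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

# Example with a synthetic depth map (2 m everywhere, VGA resolution):
depth = np.full((480, 640), 2.0)
cam_coords = depth_to_camera_coords(depth, fx=585.0, fy=585.0, cx=320.0, cy=240.0)
```

This mirrors the scene-coordinate generation in dataset.py with the pose transformation omitted: the principal-point pixel maps to (0, 0, z), and x/y grow linearly with pixel distance from the center.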
Best, Eric
Can we get calibrated test depth images for the 7Scenes dataset? The given link only provides calibrated depth maps for the training sequences.
Thanks for sharing your work.
The instructions say to download generated depth maps and precomputed camera coordinate files.
Did you generate those from the 7Scenes sequence images?
If so, how did you compute/generate them?
Is there any code for that?