vislearn / dsacstar

DSAC* for Visual Camera Re-Localization (RGB or RGB-D)
BSD 3-Clause "New" or "Revised" License

How did you generate depth maps and precomputed camera coordinate files? #6

Closed: chudur-budur closed this issue 2 years ago

chudur-budur commented 3 years ago

The instructions say to download the generated depth maps and precomputed camera coordinate files.

Did you generate those from the 7Scenes sequence images?

If yes, how did you compute/generate them?

Is there any code for that?

ebrach commented 3 years ago

There is no code available for generating the depth maps, we used a custom renderer for that. We extracted a scene mesh from the TSDF volumes that come with the 7Scenes dataset (using ParaView) and rendered these using the ground truth poses.
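No rendering code ships with the repository, but as a rough illustration of that step one could render depth maps from the extracted scene mesh with an off-the-shelf renderer such as pyrender. This is not the custom renderer the authors used; the intrinsics, image size, and the OpenCV-style camera-to-world pose are assumed inputs.

```python
import numpy as np
import trimesh
import pyrender

def render_depth_from_mesh(mesh_path, pose_cam_to_world, fx, fy, cx, cy, width, height):
    """Render a synthetic depth map of a scene mesh under a known pose.

    Illustrative sketch only; the original depth maps were produced with a
    custom renderer. The pose is assumed to be a 4x4 camera-to-world matrix
    in the usual OpenCV convention (z forward, y down).
    """
    # load the mesh extracted from the 7Scenes TSDF volume (e.g. via ParaView)
    mesh = pyrender.Mesh.from_trimesh(trimesh.load(mesh_path, force='mesh'))
    scene = pyrender.Scene()
    scene.add(mesh)

    # pyrender cameras follow the OpenGL convention (looking down -z, y up),
    # so flip the y and z axes of the OpenCV-style pose
    flip = np.diag([1.0, -1.0, -1.0, 1.0])
    cam = pyrender.IntrinsicsCamera(fx=fx, fy=fy, cx=cx, cy=cy)
    scene.add(cam, pose=pose_cam_to_world @ flip)

    # offscreen rendering of the depth buffer only (no lighting needed)
    renderer = pyrender.OffscreenRenderer(width, height)
    depth = renderer.render(scene, flags=pyrender.RenderFlags.DEPTH_ONLY)
    renderer.delete()
    return depth  # H x W float array, 0 where no geometry was hit
```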

Alternatively, you can use the original depth maps of 7Scenes and register them to the RGB images, see e.g. https://projet.liris.cnrs.fr/voir/activities-dataset/kinect-calibration.html
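A minimal sketch of such a registration, assuming hypothetical calibration inputs (depth intrinsics K_d, RGB intrinsics K_rgb, and depth-to-RGB extrinsics R, t, as provided on the Kinect calibration page above):

```python
import numpy as np

def register_depth_to_rgb(depth, K_d, K_rgb, R, t, rgb_shape):
    """Reproject a raw depth map into the RGB camera's image plane.

    depth: H x W metric depth map from the depth sensor (0 = invalid)
    K_d, K_rgb: 3x3 intrinsics of the depth and RGB cameras
    R, t: rotation and translation mapping depth-camera to RGB-camera frame
    rgb_shape: (height, width) of the RGB image
    All parameter names are illustrative, not taken from the repository.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0

    # back-project valid depth pixels into 3D points in the depth-camera frame
    z = depth[valid]
    x = (u[valid] - K_d[0, 2]) * z / K_d[0, 0]
    y = (v[valid] - K_d[1, 2]) * z / K_d[1, 1]
    pts = np.stack([x, y, z], axis=0)                 # 3 x N

    # transform into the RGB-camera frame and project with its intrinsics
    pts_rgb = R @ pts + t.reshape(3, 1)
    u_rgb = (K_rgb[0, 0] * pts_rgb[0] / pts_rgb[2] + K_rgb[0, 2]).round().astype(int)
    v_rgb = (K_rgb[1, 1] * pts_rgb[1] / pts_rgb[2] + K_rgb[1, 2]).round().astype(int)

    # scatter depth values into the RGB-aligned map (nearest pixel, no z-buffering)
    registered = np.zeros(rgb_shape, dtype=depth.dtype)
    inside = (pts_rgb[2] > 0) & \
             (u_rgb >= 0) & (u_rgb < rgb_shape[1]) & \
             (v_rgb >= 0) & (v_rgb < rgb_shape[0])
    registered[v_rgb[inside], u_rgb[inside]] = pts_rgb[2][inside]
    return registered
```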

Generating camera coordinates from the depth maps is easy. Refer to the generation of scene coordinates in dataset.py and omit the transformation with the camera pose.
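A minimal sketch of that back-projection, assuming a metric depth map and a simple pinhole model as in dataset.py (function and parameter names here are illustrative, not from the repository):

```python
import numpy as np

def depth_to_camera_coords(depth, focal_length, cx, cy):
    """Back-project a depth map (H x W, metres) into per-pixel 3D camera coordinates.

    This mirrors the scene-coordinate generation in dataset.py but stops before
    multiplying with the camera pose, so the result stays in the camera frame.
    """
    h, w = depth.shape
    xx, yy = np.meshgrid(np.arange(w), np.arange(h))   # pixel grid (u, v)

    # pinhole back-projection: X = (u - cx) * d / f, Y = (v - cy) * d / f, Z = d
    x = (xx - cx) * depth / focal_length
    y = (yy - cy) * depth / focal_length
    cam_coords = np.stack([x, y, depth], axis=0)       # 3 x H x W

    # zero out pixels with missing depth measurements
    cam_coords[:, depth <= 0] = 0
    return cam_coords
```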

Best, Eric

mhmtsarigul commented 2 years ago

Could we get calibrated test depth images for the 7Scenes dataset? The given link only provides calibrated depth maps for the training sequences.

Thanks for sharing your work.