barbararoessle / dense_depth_priors_nerf

Dense Depth Priors for Neural Radiance Fields from Sparse Input Views
MIT License

About the data after processing scannet with colmap #18

Open PruneTruong opened 2 years ago

PruneTruong commented 2 years ago

Hi,

Thanks for your nice work! I have a question regarding the depth/poses in the transforms.json files. Is the ground-truth depth consistent with the camera-to-world pose given? I.e., do I need to scale them, or can I use them directly, for example to obtain correspondences?

Thanks a lot!

barbararoessle commented 2 years ago

Hi, the target depth maps are from ScanNet. We run SfM on the RGB images to get camera poses and a sparse reconstruction. With very few input images per room, these reconstructions are very sparse and noisy. We render sparse depth maps from the reconstruction and use them to scale the camera transformations, so that the sparse depth aligns with the ScanNet target depth. So the target depth is consistent with the given camera transformations, to the degree achievable given the quality of the SfM reconstructions.
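For illustration, the alignment step described above could be sketched roughly as follows. This is not the repository's actual code; the function name, the use of a per-image median ratio as the scale estimate, and the array shapes are all assumptions:

```python
import numpy as np

def align_pose_scale(sparse_depth, target_depth, c2w):
    """Hypothetical sketch: scale a camera-to-world pose so that depth
    rendered from the SfM sparse reconstruction matches the ScanNet
    sensor ("target") depth.

    sparse_depth: (H, W) depth rendered from the sparse reconstruction,
                  0 where no SfM point projects.
    target_depth: (H, W) ScanNet sensor depth, 0 where invalid.
    c2w:          (4, 4) camera-to-world transform from SfM.
    """
    # Compare only pixels where both depth maps are valid.
    valid = (sparse_depth > 0) & (target_depth > 0)
    # Robust scale estimate: median ratio of target to SfM depth
    # (an assumption here; other robust estimators would also work).
    scale = np.median(target_depth[valid] / sparse_depth[valid])
    # Scaling the translation rescales the SfM geometry to the
    # metric units of the ScanNet depth; rotation is unchanged.
    c2w_scaled = c2w.copy()
    c2w_scaled[:3, 3] *= scale
    return c2w_scaled, scale
```

The point of scaling only the translation is that an SfM reconstruction is defined up to an arbitrary global scale, so one multiplicative factor on the camera positions (and scene points) brings it into the metric frame of the sensor depth.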

PruneTruong commented 2 years ago

I see, thanks. Could you share the code you used to generate the image splits on ScanNet? I would like to evaluate on more scenes. Thanks a lot.