PruneTruong opened 2 years ago
Hi, the target depth maps are from ScanNet. We run SfM on the RGB images to get camera poses and a sparse reconstruction. With only very few input images per room, these SfM reconstructions are very sparse and noisy. We render sparse depth maps from this reconstruction and use them to scale the camera transformations so that the sparse depth aligns with the ScanNet target depth. So the target depth is consistent with the given camera transformations, to the degree achievable given the quality of the SfM reconstructions.
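For illustration, a minimal sketch of the scale-alignment step described above, assuming rendered sparse depth and sensor depth as same-resolution arrays with zeros marking invalid pixels (all function and variable names here are hypothetical, not from the repository):

```python
import numpy as np

def align_scale(sparse_depth, target_depth):
    """Estimate a single scale factor aligning SfM sparse depth to sensor depth.

    sparse_depth: (H, W) depth rendered from the SfM reconstruction, 0 where empty.
    target_depth: (H, W) ScanNet sensor depth, 0 where invalid.
    """
    valid = (sparse_depth > 0) & (target_depth > 0)
    # The median ratio is robust to outliers in the noisy sparse reconstruction.
    return np.median(target_depth[valid] / sparse_depth[valid])

def scale_pose(c2w, scale):
    """Apply the scale to a 4x4 camera-to-world pose: only translation changes."""
    c2w = c2w.copy()
    c2w[:3, 3] *= scale
    return c2w
```

Scaling only the translations keeps the rotations valid while putting the poses in metric (sensor-depth) units.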
I see, thanks. Could you share the code you used to generate the image splits on ScanNet? This is in case I wanted to evaluate on more scenes. Thanks a lot.
Hi,
Thanks for your nice work! I have a question regarding the depth/poses in the transform.json files. Is the ground-truth depth consistent with the given camera-to-world poses? I.e., do I need to scale them, or can I use them directly, for example to obtain correspondences?
Thanks a lot!
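For context, obtaining correspondences from consistent depth and poses amounts to unprojecting pixels with the depth and reprojecting them into the other view. A minimal sketch, assuming a shared pinhole intrinsics matrix `K` and 4x4 camera-to-world poses (all names hypothetical):

```python
import numpy as np

def warp_pixels(depth1, K, c2w1, c2w2):
    """Map every pixel of view 1 to its corresponding location in view 2.

    depth1: (H, W) depth of view 1; K: (3, 3) intrinsics;
    c2w1, c2w2: (4, 4) camera-to-world poses. Returns (H, W, 2) pixel coords.
    """
    H, W = depth1.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x HW
    # Unproject to camera-1 coordinates using the depth, then lift to world.
    cam1 = np.linalg.inv(K) @ pix * depth1.reshape(-1)
    world = c2w1 @ np.vstack([cam1, np.ones((1, cam1.shape[1]))])
    # Transform into camera 2 and project with the intrinsics.
    cam2 = np.linalg.inv(c2w2) @ world
    proj = K @ cam2[:3]
    uv2 = proj[:2] / proj[2]
    return uv2.T.reshape(H, W, 2)
```

If the depth and poses are consistent (as the answer above states), no extra scaling is needed before this warp; correspondences are only as accurate as the underlying SfM alignment.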