-
Hello,
Congratulations on this research; I have been having a very good time experimenting with this pipeline.
However, I have a question.
I loaded a mesh and created six depth maps to align i…
-
When I use k4aViewer, I find that the depth map is displayed in color.
![QQ screenshot 20211023151858](https://user-images.githubusercontent.com/74224500/138546875-629a81ab-579c-4d8b-83f9-01a90e9d61ab.jpg)
I wonder i…
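For context, viewers typically apply a colormap to the raw 16-bit depth purely for display; the underlying sensor data remains single-channel. A minimal sketch of that idea (numpy only; the blue-to-red ramp and the range limits are illustrative stand-ins, not the viewer's actual colormap):

```python
import numpy as np

def colorize_depth(depth_mm: np.ndarray, d_min: float = 500, d_max: float = 4000) -> np.ndarray:
    """Map a 16-bit depth image (millimeters) to an RGB visualization.

    This mimics what depth viewers do for display only; the raw data
    stays single-channel uint16. Range limits here are illustrative.
    """
    d = np.clip(depth_mm.astype(np.float32), d_min, d_max)
    t = (d - d_min) / (d_max - d_min)                 # normalize to [0, 1]
    # Simple blue-to-red ramp as a stand-in colormap.
    rgb = np.stack([t, 1.0 - np.abs(2 * t - 1), 1.0 - t], axis=-1)
    return (rgb * 255).astype(np.uint8)

depth = np.full((4, 4), 2250, dtype=np.uint16)        # synthetic mid-range depth
vis = colorize_depth(depth)
```

So a colored display does not by itself mean the stored depth values are color; the metric values are usually recoverable from the original 16-bit image.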
-
When I use the command line, I get a depth map with only 8-bit integer precision; using it to deform a 2D plane in Blender therefore produces stair-stepping, for example:
![image](https://github…
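For reference, one common workaround (a sketch, not this tool's documented behavior) is to keep the depth at 16-bit precision end to end, i.e. quantize to uint16 rather than uint8 before writing the PNG:

```python
import numpy as np

def to_uint16_depth(depth_m: np.ndarray, scale: float = 1000.0) -> np.ndarray:
    """Quantize metric depth (meters) to uint16 millimeters.

    65535 levels instead of 256 avoids the visible stair-stepping an
    8-bit export produces when the map drives displacement.
    """
    return np.clip(np.round(depth_m * scale), 0, 65535).astype(np.uint16)

depth = np.linspace(0.5, 3.0, 8, dtype=np.float32)    # toy depth values in meters
d16 = to_uint16_depth(depth)
```

Blender can read a 16-bit grayscale PNG at full precision for displacement (set the image to Non-Color data so no color transform is applied).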
-
Thank you for the excellent work and for sharing the dataset!
I noticed that the depth maps in the dataset are PNG visualization images. Could you please tell me how to render the ground-truth depth map? …
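In case it helps while waiting for the authors' renderer: ground-truth depth is normally produced by projecting the mesh into the camera with a z-buffer. A toy numpy sketch that splats mesh vertices through a hypothetical pinhole intrinsic matrix `K` (a real renderer would rasterize and interpolate over triangles, which is omitted here):

```python
import numpy as np

def render_vertex_depth(verts_cam: np.ndarray, K: np.ndarray, h: int, w: int) -> np.ndarray:
    """Splat 3D points (camera frame, z forward) into a depth buffer.

    A real renderer rasterizes triangles; this vertex-only z-buffer is a
    toy illustration of how metric ground-truth depth is produced.
    """
    depth = np.full((h, w), np.inf, dtype=np.float32)
    z = verts_cam[:, 2]
    valid = z > 0
    p = (K @ verts_cam[valid].T).T                    # pinhole projection
    u = np.round(p[:, 0] / p[:, 2]).astype(int)
    v = np.round(p[:, 1] / p[:, 2]).astype(int)
    zs = z[valid]
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for ui, vi, zi in zip(u[inside], v[inside], zs[inside]):
        depth[vi, ui] = min(depth[vi, ui], zi)        # keep the nearest surface
    return depth

K = np.array([[50.0, 0, 32], [0, 50.0, 32], [0, 0, 1]])  # hypothetical intrinsics
verts = np.array([[0.0, 0.0, 2.0], [0.1, 0.0, 1.5]])     # toy mesh vertices
d = render_vertex_depth(verts, K, 64, 64)
```

The rendered buffer holds metric distances (here, meters), which is what the PNG visualizations in the dataset were presumably derived from.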
-
Hello! I would like to ask how the NYUv2 results can be converted into images. And how do you train on the NYUv2 data? This is clearly an image rather than a point cloud. Looking forward to your re…
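For what it's worth, NYUv2 ground-truth depth is stored as floating-point meters, so turning a depth array (ground truth or prediction) into a viewable image is just per-image normalization. A sketch with made-up values (visualization only; training and evaluation should use the raw float depths):

```python
import numpy as np

def depth_to_image(depth_m: np.ndarray) -> np.ndarray:
    """Normalize a float depth map (meters) to an 8-bit grayscale image.

    This is for visualization only; use the raw float depths for
    training and metric evaluation.
    """
    d = depth_m.astype(np.float32)
    lo, hi = np.nanmin(d), np.nanmax(d)
    t = (d - lo) / max(hi - lo, 1e-6)                 # guard against flat maps
    return (t * 255).astype(np.uint8)

pred = np.array([[1.0, 2.0], [3.0, 4.0]])             # toy predicted depths (meters)
img = depth_to_image(pred)
```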
-
Some additional local installation tips would be appreciated. I can't get your project running.
It seems this checkpoint is needed:
https://pan.baidu.com/s/1n6FlqrOTZqHX-F6OhcvNyA?pwd=g2cm
But …
-
Hi, thank you very much for open-sourcing this project!
Would it be possible to share the ScanNet splits used in the paper (training and test), as well as the calculated COLMAP poses and sparse d…
-
### Specifications like the version of the project, operating system, and hardware
Running master branch
### Steps to reproduce the problem
Looking at the code, we found this odd behavior after …
-
Hello, I replicated your method on the uHumans2 dataset and it worked very well. However, when trying to use vlmaps in a real environment, the mapping quality is very poor. I used a ZED2i and called its SDK…
-
Thank you for your contribution!
I saw in your metric-depth description that the output of the pre-trained model can be used as a disparity map. Now I want to use a custom dataset that includes RGB i…
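For context, converting a disparity output to metric depth needs scene scale: with known stereo geometry it is depth = focal · baseline / disparity, and with relative (affine-invariant) monocular output one typically fits a scale and shift against sparse ground truth. A sketch (the focal length, baseline, and toy values below are illustrative, not from the model):

```python
import numpy as np

def disparity_to_depth(disp: np.ndarray, focal_px: float, baseline_m: float,
                       eps: float = 1e-6) -> np.ndarray:
    """Classic stereo relation: depth = focal * baseline / disparity."""
    return focal_px * baseline_m / np.maximum(disp, eps)

def align_scale_shift(pred_disp: np.ndarray, gt_depth: np.ndarray):
    """Least-squares scale/shift so s*disp + t matches 1/gt_depth.

    Mirrors the common practice of aligning affine-invariant disparity
    predictions to sparse metric ground truth.
    """
    gt_disp = 1.0 / gt_depth
    A = np.stack([pred_disp, np.ones_like(pred_disp)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, gt_disp, rcond=None)
    return s, t

d = disparity_to_depth(np.array([100.0, 50.0]), focal_px=700.0, baseline_m=0.12)
s, t = align_scale_shift(np.array([0.5, 0.25]), np.array([2.0, 4.0]))
```

If the custom dataset has sparse LiDAR or stereo depth, the scale/shift fit above is usually enough to bring the model's relative disparities into metric units.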