mbanani / unsupervisedRR

[CVPR 2021 - Oral] UnsupervisedR&R: Unsupervised Point Cloud Registration via Differentiable Rendering
https://mbanani.github.io/unsupervisedrr
MIT License
137 stars · 20 forks

About scannet #3

Closed phdymz closed 2 years ago

phdymz commented 2 years ago

Thank you for sharing your code. I have downloaded ScanNet and found that its intrinsics include both intrinsic_depth and intrinsic_color. However, in the `make_scannet_dict` code, the depth intrinsics are replaced with the color intrinsics. Is it okay to do this?

ThakurSarveshGit commented 2 years ago

Hi,

I also downloaded the ScanNet dataset, and it comes in a completely different directory structure. Did you restructure it according to the Dataset README for UnsupervisedRR? Were you able to run inference on the ScanNet dataset?

mbanani commented 2 years ago

Hey, sorry I missed the issue.

@phdymz Please check issue #2 regarding the intrinsics. To summarize: the difference between the intrinsics is pretty small after preprocessing, so I suspect it wouldn't make a difference, but I haven't properly inspected this.
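For intuition on why the difference is small, here is a minimal sketch (not the repo's code; the matrices below are typical ScanNet-like values chosen for illustration, and the 1296x968 / 640x480 resolutions are the usual ScanNet color and depth sizes) that rescales the color intrinsics to the depth resolution and compares the two:

```python
import numpy as np

# Illustrative ScanNet-like intrinsics (example values, not from a real scan).
K_color = np.array([[1170.2,    0.0, 647.8],
                    [   0.0, 1170.2, 483.8],
                    [   0.0,    0.0,   1.0]])  # for 1296x968 color images
K_depth = np.array([[577.9,   0.0, 319.5],
                    [  0.0, 577.9, 239.5],
                    [  0.0,   0.0,   1.0]])    # for 640x480 depth maps

def rescale_intrinsics(K, src_wh, dst_wh):
    """Scale focal lengths and principal point when resizing images."""
    sx = dst_wh[0] / src_wh[0]
    sy = dst_wh[1] / src_wh[1]
    return np.diag([sx, sy, 1.0]) @ K

# After rescaling to the same resolution, the two matrices differ by only
# a few pixels in focal length / principal point.
K_color_at_depth = rescale_intrinsics(K_color, (1296, 968), (640, 480))
diff = np.abs(K_color_at_depth - K_depth)
print(diff.max())
```

With values like these, the largest entry-wise difference after rescaling is on the order of a couple of pixels, which is why swapping one set of intrinsics for the other has little effect after preprocessing.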

@ThakurSarveshGit I am curious, what format does it come in? It should be pretty easy to adapt the dictionary-creation code to account for a different structure and define the paths there.
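As a rough sketch of what that adaptation could look like (this is not the repo's `make_scannet_dict`; the `color/` and `depth/` subfolder layout and file extensions below are hypothetical, so adjust them to whatever structure your download actually has):

```python
import os

def make_scene_dict(root):
    """Build {scene: {frame_id: {"color": path, "depth": path}}} from a
    hypothetical layout root/<scene>/color/*.jpg and root/<scene>/depth/*.png.
    Frames missing either modality are skipped."""
    scenes = {}
    for scene in sorted(os.listdir(root)):
        color_dir = os.path.join(root, scene, "color")
        depth_dir = os.path.join(root, scene, "depth")
        if not (os.path.isdir(color_dir) and os.path.isdir(depth_dir)):
            continue  # not a scene folder in the expected layout
        frames = {}
        for fname in sorted(os.listdir(color_dir)):
            frame_id = os.path.splitext(fname)[0]
            depth_path = os.path.join(depth_dir, frame_id + ".png")
            if os.path.exists(depth_path):
                frames[frame_id] = {
                    "color": os.path.join(color_dir, fname),
                    "depth": depth_path,
                }
        scenes[scene] = frames
    return scenes
```

The point is just that all path assumptions live in one place, so a different download structure only requires changing the directory/extension conventions here.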

I would also recommend checking out the follow-up to this paper: BYOC. While BYOC is concerned with learning geometric features, I found that using just the visual stream resulted in better performance than UnsupervisedRR, and that you can rely entirely on the correspondence loss without the rendering pipeline, which makes the whole approach much simpler. Furthermore, that code base uses PyTorch Lightning and Hydra configs, which I found easier to work with and extend for other experiments. Let me know if you have any other questions.