kxhit / vMAP

[CVPR 2023] vMAP: Vectorised Object Mapping for Neural Field SLAM
https://kxhit.github.io/vMAP

Novel view synthesis #27

Closed BIT-DYN closed 3 months ago

BIT-DYN commented 3 months ago

Thanks for such excellent work!

I have a question regarding synthesizing a novel view, which involves both RGB and depth data. Specifically, I'm curious about how to accomplish this based on a given camera position after completing the training of all the data. Unfortunately, I couldn't find the relevant code in this repo. Could you please provide some guidance on this matter?

Thank you in advance for your help!

kxhit commented 3 months ago

Hi @BIT-DYN, thanks for your interest!

Yeah, it's similar to this issue: https://github.com/kxhit/vMAP/issues/19#issuecomment-1637897866

Basically, given a camera pose, you can loop over each object, do its NeRF rendering, and composite the per-object renders, sorted by a depth z-buffer, to form the novel view of the scene. For each object's NeRF rendering, run an AABB ray-box test to find the rays that intersect the object's bounding box, then render those rays to pixels (RGB and depth). This gives you a novel-view RGB image and a novel-view depth image for that object.
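To make the two steps concrete, here is a minimal NumPy sketch (not the vMAP code; the function names, the epsilon handling of axis-parallel rays, and the convention that uncovered pixels carry `np.inf` depth are all my assumptions): a slab-method AABB ray-box test, and a z-buffer composite that, per pixel, keeps the object whose rendered depth is nearest.

```python
import numpy as np

def ray_aabb_intersect(origins, dirs, box_min, box_max):
    """Slab method: per-ray entry/exit distances for an axis-aligned box.

    origins, dirs: (N, 3) ray origins and directions.
    Returns (hit_mask, t_near, t_far), each of shape (N,).
    """
    # Guard against division by zero for rays parallel to a box face.
    safe_dirs = np.where(np.abs(dirs) < 1e-9, 1e-9, dirs)
    inv_d = 1.0 / safe_dirs
    t0 = (box_min - origins) * inv_d
    t1 = (box_max - origins) * inv_d
    t_near = np.minimum(t0, t1).max(axis=-1)  # latest entry across axes
    t_far = np.maximum(t0, t1).min(axis=-1)   # earliest exit across axes
    hit = t_far >= np.maximum(t_near, 0.0)    # box in front of the ray
    return hit, t_near, t_far

def composite_by_depth(rgbs, depths):
    """Z-buffer composite of per-object renders.

    rgbs: (K, H, W, 3), depths: (K, H, W); pixels an object does not
    cover should hold depth np.inf so they never win the z-test.
    Returns the fused (H, W, 3) RGB and (H, W) depth images.
    """
    winner = np.argmin(depths, axis=0)  # nearest object index per pixel
    h, w = winner.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return rgbs[winner, yy, xx], depths[winner, yy, xx]
```

In a full pipeline, each object's NeRF would be queried only along the `[t_near, t_far]` segment of the rays flagged by `hit`, and the resulting per-object RGB/depth images would then go through `composite_by_depth` to produce the scene-level novel view.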

Thank you!

BIT-DYN commented 3 months ago

Hi @kxhit, thank you for your quick reply!

Could you please provide this part of the code? I'm having some trouble implementing it myself.

Thank you!