ashawkey / torch-ngp

A pytorch CUDA extension implementation of instant-ngp (sdf and nerf), with a GUI.
MIT License
2.07k stars · 272 forks

Cuda raymarching #34

Closed · StefanoS90 closed this 2 years ago

StefanoS90 commented 2 years ago

Hi! Thanks a lot for the great work.

I am trying to adapt this NeRF implementation to the NeRF-in-the-wild idea, where a specific embedding is learned for every training image. To do this, during the rendering process, I need to associate each point in xyzs fed to the MLP with its original image ID.

With the non-CUDA rendering this is very easy, because there is a constant sampling pattern along each ray, but with the CUDA ray marching I do not know how to keep track of this.
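For context, the per-image embedding idea described above can be sketched as follows. All names and sizes here are illustrative (not part of torch-ngp): a learned `nn.Embedding` table indexed by image ID, concatenated to each point's encoded features before the MLP.

```python
import torch
import torch.nn as nn

# Illustrative sizes (not from torch-ngp).
NUM_IMAGES, EMBED_DIM, POS_DIM = 100, 16, 32

# One learned appearance vector per training image, NeRF-in-the-wild style.
appearance = nn.Embedding(NUM_IMAGES, EMBED_DIM)

def mlp_input(point_feats: torch.Tensor, image_ids: torch.Tensor) -> torch.Tensor:
    """point_feats: [M, POS_DIM] encoded positions; image_ids: [M] long tensor.

    Looks up the per-image embedding for each point and concatenates it
    to the point features, giving the MLP input of size POS_DIM + EMBED_DIM.
    """
    return torch.cat([point_feats, appearance(image_ids)], dim=-1)

feats = torch.randn(5, POS_DIM)
ids = torch.zeros(5, dtype=torch.long)   # pretend all 5 points come from image 0
print(mlp_input(feats, ids).shape)       # torch.Size([5, 48])
```

The hard part, as the question notes, is producing the `image_ids` tensor per sampled point when the CUDA ray marcher compacts points from many rays into one flat batch.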

In particular, looking at this line:

https://github.com/ashawkey/torch-ngp/blob/b5799e90dca4e188b14f8c77abf0d420c0bac915/nerf/renderer.py#L240

Is there a way to know, for each xyz point, which ray it belongs to?

Thanks!

ashawkey commented 2 years ago

Hi! Yes, the rays tensor here records the points in each ray (see here). However, in the latest commit I haven't recorded each ray's image ID. Maybe you can try this commit, where we only sample rays from one image at each training step.
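Based on the description above, a per-point ray index can be expanded from such a rays tensor. This is a hedged sketch: it assumes rays is an [N, 3] integer tensor with one row per ray, (ray_index, point_offset, num_points), and that points are stored contiguously in ray order; the exact layout may differ between commits.

```python
import torch

def points_to_rays(rays: torch.Tensor, num_points: int) -> torch.Tensor:
    """Map each sampled point to the index of the ray it came from.

    rays: [N, 3] int tensor of (ray_index, point_offset, num_points) rows,
          assuming points are laid out contiguously per ray (an assumption,
          not confirmed by the repo). Returns an [M] tensor of ray indices.
    """
    # Repeat each ray's index once per point sampled on that ray.
    point_ray_ids = torch.repeat_interleave(rays[:, 0], rays[:, 2])
    assert point_ray_ids.shape[0] == num_points
    return point_ray_ids

# Toy example: ray 0 contributed 2 points, ray 1 contributed 3 points.
rays = torch.tensor([[0, 0, 2], [1, 2, 3]])
print(points_to_rays(rays, 5))  # tensor([0, 0, 1, 1, 1])
```

Once each point has a ray index, a ray-to-image lookup (if the data provider recorded one) would give the per-point image ID.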

ashawkey commented 2 years ago

@StefanoS90 With the latest commit, we reverted to the old data provider, so each training step only samples rays from a single image.
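A consequence worth noting (my reading, not stated in the thread): when every ray in a step comes from one image, no per-point bookkeeping is needed at all; the single image ID can simply be broadcast to every sampled point. Names and sizes below are illustrative.

```python
import torch
import torch.nn as nn

# Illustrative per-image embedding table (not part of torch-ngp).
appearance = nn.Embedding(100, 16)

image_id = torch.tensor([3])            # the one image used this training step
embed = appearance(image_id)            # [1, 16]
per_point = embed.expand(2048, -1)      # broadcast to all 2048 sampled points
print(per_point.shape)                  # torch.Size([2048, 16])
```

This is why the single-image data provider makes the NeRF-in-the-wild adaptation easier, at the cost of losing cross-image ray mixing.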

StefanoS90 commented 2 years ago

Thanks for letting me know. I actually have a related question: is there a reason for going back to loading only a single image? Didn't you notice a performance increase from mixing rays of different images in the same batch?

Thanks

ashawkey commented 2 years ago

Yes, but the improvement is not significant, and it requires about 3x more CPU memory (it can reach 10 GB+) to load the dataset, which is not very cost-effective.

ashawkey commented 2 years ago

closed for now.