Closed. StefanoS90 closed this issue 2 years ago.
@StefanoS90 With the latest commit, we reverted to the old data provider, so each training step only samples from a single image.
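For illustration, a single-image data provider like the one described above can be sketched as follows. The function name and tensor shapes are hypothetical, not the repo's actual API: each training step picks one random image and samples a batch of pixel rays from it.

```python
import torch

def sample_single_image_rays(images, num_rays):
    """Hypothetical sketch: sample `num_rays` pixels from one random image.

    images: [N, H, W, 3] float tensor of training images (illustrative layout).
    Returns the chosen image id, pixel coordinates, and RGB supervision targets.
    """
    N, H, W, _ = images.shape
    img_id = torch.randint(0, N, (1,)).item()  # one image per training step
    ys = torch.randint(0, H, (num_rays,))
    xs = torch.randint(0, W, (num_rays,))
    rgbs = images[img_id, ys, xs]              # [num_rays, 3] target colors
    return img_id, xs, ys, rgbs
```

Since all rays in the batch share `img_id`, only that image's pixels need to be resident per step, which is where the CPU-memory saving mentioned below comes from.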
Thanks for letting me know. I actually have a related question: is there a reason for going back to loading only a single image? Didn't you notice a performance improvement from mixing rays of different images in the same batch?
Thanks
Yeah, but the improvement is not significant, and it requires about 3x more CPU memory (it can reach 10 GB+) to load the dataset, which is not very cost-effective.
Closed for now.
Hi! Thanks a lot for the great work.
I am trying to adapt this NeRF implementation to the NeRF in the Wild idea, where a specific embedding is learned for every training image. To do this, during the rendering process I need to associate each xyz point fed to the MLP with its original image id.
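The per-image embedding idea can be sketched as below. This is a minimal illustrative module, not the asker's or the repo's actual code; the class name, feature dimension, and layer sizes are all assumptions. An `nn.Embedding` indexed by image id is concatenated with the per-point features before the color head, in the spirit of NeRF in the Wild's appearance embeddings.

```python
import torch
import torch.nn as nn

class AppearanceHead(nn.Module):
    """Hypothetical sketch: per-image appearance embedding for a NeRF color branch.

    Each training image gets a learned embedding vector; it is looked up by
    image id and concatenated with the point features before predicting RGB.
    All sizes are illustrative.
    """
    def __init__(self, num_images, embed_dim=16, feat_dim=32):
        super().__init__()
        self.embedding = nn.Embedding(num_images, embed_dim)
        self.color = nn.Sequential(
            nn.Linear(feat_dim + embed_dim, 64), nn.ReLU(),
            nn.Linear(64, 3), nn.Sigmoid(),
        )

    def forward(self, feats, image_ids):
        # feats: [M, feat_dim] per-point features; image_ids: [M] long tensor
        e = self.embedding(image_ids)                    # [M, embed_dim]
        return self.color(torch.cat([feats, e], dim=-1)) # [M, 3] RGB
```

The hard part, as the question below says, is producing that `image_ids` tensor for every sampled point.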
With the non-CUDA rendering this is very easy because there is a constant number of samples per ray, but with the CUDA ray marching I do not know how to keep track of this.
In particular, looking at this line:
https://github.com/ashawkey/torch-ngp/blob/b5799e90dca4e188b14f8c77abf0d420c0bac915/nerf/renderer.py#L240
Is there a way to know, for each xyz point, which ray it belongs to?
Thanks!
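One possible approach, sketched under the assumption that the CUDA marcher exposes how many points it generated per ray (a packed offset/count layout is typical for variable-length ray marching; whether torch-ngp's `raymarching` extension returns exactly this is not confirmed here): expand the per-ray counts into a per-point ray id with `repeat_interleave`.

```python
import torch

# Hypothetical per-ray sample counts from a CUDA ray marcher; ray 1 was
# skipped entirely (e.g., it hit no occupied grid cells). Values illustrative.
counts = torch.tensor([3, 0, 5, 2])

# Expand counts into a per-point ray id: ray i is repeated counts[i] times,
# matching the packed order in which the marcher emitted the xyz points.
ray_ids = torch.arange(len(counts)).repeat_interleave(counts)
# ray_ids -> tensor([0, 0, 0, 2, 2, 2, 2, 2, 3, 3])

# With a known ray -> image mapping, the per-point image id follows directly:
rays_to_image = torch.tensor([7, 7, 9, 9])       # illustrative image ids
image_ids = rays_to_image[ray_ids]
```

The resulting `image_ids` can then index an embedding table for every point fed to the MLP.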