yenchenlin / nerf-pytorch

A PyTorch implementation of NeRF (Neural Radiance Fields) that reproduces the results.

how to train my own data for 360 view rendering #57

Closed Holmes-Alan closed 2 years ago

Holmes-Alan commented 2 years ago

Thank you for your code. Could you please explain how to use it to train on customized data, with images captured from 360° around a scene?

gkouros commented 2 years ago

@Holmes-Alan Did you solve your problem? If so, can you give some insights? I've already tried to train on my own 360 scene, but with nowhere near good results. The Lego scene is a 360 scene, but its configuration didn't work for me. I also tried training with the `--spherify` flag or with the config of the LLFF scenes, but no luck there either.

@yenchenlin Your input would be really appreciated.

gkouros commented 2 years ago

In the original NeRF repo, they suggest the following: "For a spherically captured 360 scene, we recommend adding the `--no_ndc --spherify --lindisp` flags." So I'll give it a try and report back for anyone having a similar issue.

gkouros commented 2 years ago

To give an update on my last comment: I trained on a scene I captured myself using the flags `--no_ndc --spherify --lindisp`, which seem to work better for 360 scenes. The rendering quality is still not satisfactory, but I think that could be due to suboptimal poses or high scene complexity.
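For anyone trying the same, a minimal sketch of the invocation I mean (the config file and scene path are placeholders; start from one of the LLFF configs and point `--datadir` at your own scene):

```bash
# Sketch: start from an LLFF config and override the 360-scene flags.
# configs/fern.txt and ./data/my_scene are placeholders for your setup.
python run_nerf.py --config configs/fern.txt \
    --datadir ./data/my_scene \
    --expname my_scene_360 \
    --no_ndc --spherify --lindisp
```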

jdiazram commented 1 year ago

Hi guys, a question: what happens if I have a video instead of images? I suppose I have to extract frames from the video with some software and then run imgs2poses.py on them, as the author suggests here? Is that right? Thx
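In case it helps, roughly the pipeline I have in mind (frame rate and paths are placeholders; imgs2poses.py comes from the LLFF repo and expects the frames in an images/ subfolder):

```bash
# Extract frames from the video with ffmpeg (fps=2 is a placeholder rate;
# -qscale:v 2 keeps the JPEGs near full quality).
mkdir -p data/my_scene/images
ffmpeg -i my_video.mp4 -qscale:v 2 -vf fps=2 data/my_scene/images/%04d.jpg

# Then estimate camera poses with COLMAP via the LLFF script.
python imgs2poses.py data/my_scene
```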