kakaobrain / nerf-factory

An awesome PyTorch NeRF library
https://kakaobrain.github.io/NeRF-Factory
Apache License 2.0

A question about the epoch_size and max_iter_step when training the mipnerf360 #19

Closed YZsZY closed 1 year ago

YZsZY commented 1 year ago

Hello author, thank you for your great work! I recently tried to make some improvements based on the mipnerf360 code you reproduced, and I ran into some confusion that I would like to ask about.

The mipnerf360 paper mentions training for 250k iterations, but your code sets 250k epochs, which raises a question.

By default, PyTorch Lightning counts an epoch as one full pass over the dataset, so a single epoch often already corresponds to hundreds or thousands of steps. Is there something wrong with setting it to 250k epochs? Perhaps it would be more appropriate to set `max_steps` to 250k in PyTorch Lightning: https://github.com/kakaobrain/NeRF-Factory/blob/ac10296a39d8e5f1940b590ca18e3689e17eadf4/configs/mipnerf360/360_v2.gin#L11 https://github.com/kakaobrain/NeRF-Factory/blob/ac10296a39d8e5f1940b590ca18e3689e17eadf4/configs/mipnerf360/360_v2.gin#L18
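To illustrate the epoch-vs-step mismatch with made-up numbers (these are hypothetical, not taken from the repository's configs): if one epoch is a full pass over the data, the total step count grows multiplicatively with the dataset size.

```python
import math

# Hypothetical numbers for illustration only (not from the repo's configs)
num_items = 100        # items the DataLoader yields in one full pass
batch_size = 4
steps_per_epoch = math.ceil(num_items / batch_size)  # 25 steps per epoch

# Training for 250k *epochs* would then run far more than 250k *steps*
total_steps = 250_000 * steps_per_epoch
```

With even this tiny dataset, 250k epochs would already mean 6.25 million optimization steps, which is why counting by steps seems more faithful to the paper.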

I may have misunderstood something, so I hope you can point out the problem; I apologize for disturbing you. Looking forward to your reply!

YZsZY commented 1 year ago

Oh, I think I have found a way to solve this problem: use a sampler so that each epoch covers 250k iterations when loading the data. Sorry for disturbing you!
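For reference, a minimal sketch of such a sampler (hypothetical names, not the repository's actual code): any iterable with a `__len__` can be passed as `sampler=` to a PyTorch `DataLoader`, so yielding a fixed number of random indices per pass decouples the epoch length from the dataset size.

```python
import random

class FixedLengthSampler:
    """Yield `num_iters` random indices per epoch, so one epoch has a
    fixed length regardless of dataset size. (Hypothetical sketch.)"""

    def __init__(self, dataset_len, num_iters=250_000):
        self.dataset_len = dataset_len
        self.num_iters = num_iters

    def __iter__(self):
        # Sample with replacement from the dataset's index range
        for _ in range(self.num_iters):
            yield random.randrange(self.dataset_len)

    def __len__(self):
        # DataLoader uses this to size one "epoch"
        return self.num_iters
```

Passing an instance as `DataLoader(dataset, sampler=FixedLengthSampler(len(dataset)), ...)` would then make Lightning count one epoch as 250k loaded samples (divided by the batch size, if batching is used).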