@yzslab @jonbarron @bmild @gkouros Can you please give any input on this?
Have a look here: https://github.com/google-research/multinerf#oom-errors. It should solve your issue.
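In case the link rots, that README section boils down to shrinking the memory-heavy bindings in your .gin override; a minimal sketch (the values here are illustrative, not prescribed by the README):

```
# Illustrative gin overrides to reduce memory use.
Config.batch_size = 4096         # fewer rays per training step
Config.render_chunk_size = 4096  # fewer rays per render/eval chunk
# If you cut batch_size by some factor, the README also suggests scaling
# the learning rate down and the iteration count up by that same factor.
```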
@gkouros Is it possible to resume training from past checkpoints? Also, is there any other research available with shorter training time that doesn't compromise scene quality?
With the right hyperparameters I managed to get decent results: Config.compute_normal_metrics = False, batch_size = 16384, render_chunk_size = 16384, lr_init = 0.002, lr_final = 0.00002 (see the gin sketch below).
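For clarity, here are those overrides as they would appear in a .gin file (a minimal sketch, assuming multinerf-style Config bindings; the `batch_size: int = 16384` form above is the dataclass-default syntax from configs.py):

```
Config.compute_normal_metrics = False
Config.batch_size = 16384
Config.render_chunk_size = 16384
Config.lr_init = 0.002
Config.lr_final = 0.00002
```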
Result after 110k iterations:
Results are good, with a PSNR of 33, but the training time is too long: roughly 40 hours on four 40 GB GPUs.
Is there any other research available with shorter training time that doesn't compromise scene quality? @gkouros
Even after 250k iterations with the following hyperparameters (the same overrides sketched above), the results are not that great: Config.compute_normal_metrics = False, batch_size = 16384, render_chunk_size = 16384, lr_init = 0.002, lr_final = 0.00002.
@yzslab @jonbarron @bmild @gkouros @dorverbin
I created a Blender dataset like the Shiny Blender dataset, but without normals and depths. A sample image looks like this: I trained this scene with Ref-NeRF using blender_refnerf.gin, but with Config.compute_normal_metrics = False since the dataset has no normals, and batch_size = 4096, render_chunk_size = 4096, lr_init = 0.002, lr_final = 0.00002 (full override sketched below).
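Put together as a single .gin override, that setup would look roughly like this (a sketch; the include path is an assumption and may differ in your checkout):

```
include 'configs/blender_refnerf.gin'  # assumed path to the base config

Config.compute_normal_metrics = False  # dataset has no ground-truth normals
Config.batch_size = 4096
Config.render_chunk_size = 4096
Config.lr_init = 0.002
Config.lr_final = 0.00002
```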
The result after 145k iterations is not good: the rendered object is not glossy/shiny and doesn't look like a specular material at all.