google-research / multinerf

A Code Release for Mip-NeRF 360, Ref-NeRF, and RawNeRF
Apache License 2.0

Results of Ref-NeRF on a ShinyBlender-like scene are not good #136

Closed prakashknaikade closed 2 months ago

prakashknaikade commented 1 year ago

I created a Blender dataset similar to the ShinyBlender dataset, but without normals and depths (sample image attached). I trained this scene with Ref-NeRF using blender_refnerf.gin, but with Config.compute_normal_metrics = False (since the dataset has no normals), batch_size = 4096, render_chunk_size = 4096, lr_init = 0.002, lr_final = 0.00002.
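For reference, the overrides described above would look roughly like this as gin bindings (a sketch, assuming the standard multinerf Config fields; verify the field names against configs/blender_refnerf.gin):

```gin
# Sketch of the overrides described above (assumed Config field names):
Config.compute_normal_metrics = False
Config.batch_size = 4096
Config.render_chunk_size = 4096
Config.lr_init = 0.002
Config.lr_final = 0.00002
```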

The result after 145k iterations is not good: the rendered object is not glossy/shiny and doesn't look like a specular material at all.

prakashknaikade commented 1 year ago

@yzslab @jonbarron @bmild @gkouros Can you please give any input on this?

gkouros commented 1 year ago

Have a look here https://github.com/google-research/multinerf#oom-errors. It should solve your issue.
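The OOM section linked above recommends shrinking the batch size to fit memory; when doing so, a common convention (and an assumption here, not an official multinerf utility) is to scale the learning rates down and the step count up by the same factor. A minimal sketch:

```python
# Hypothetical helper (not part of multinerf): linear-scaling rule for
# shrinking batch_size to avoid OOM. Base values are the multinerf
# defaults quoted in this thread (batch 16384, lr 2e-3 -> 2e-5, 250k steps).
def scale_hparams(new_batch,
                  base_batch=16384,
                  base_lr_init=2e-3,
                  base_lr_final=2e-5,
                  base_steps=250_000):
    factor = new_batch / base_batch
    return {
        "batch_size": new_batch,
        # Learning rates scale down linearly with the batch size...
        "lr_init": base_lr_init * factor,
        "lr_final": base_lr_final * factor,
        # ...while the number of steps scales up to keep total samples seen.
        "max_steps": int(round(base_steps / factor)),
    }

print(scale_hparams(4096))
```

With new_batch = 4096 (a quarter of the default), this yields lr_init = 5e-4, lr_final = 5e-6, and 1,000,000 steps, which is why a reduced batch size also stretches wall-clock training time.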

prakashknaikade commented 1 year ago

@gkouros Is it possible to resume training from past checkpoints? Also, is there any other research available with shorter training time that doesn't compromise scene quality?
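On resuming: as far as I can tell from the codebase, train.py restores the latest checkpoint found in Config.checkpoint_dir automatically (via flax checkpointing), so re-running the same training command should continue where it left off. A sketch of the invocation, following the pattern in the repo README (paths are placeholders):

```shell
# Re-running the same command resumes from the newest checkpoint in
# Config.checkpoint_dir (assumption based on the flax checkpoint restore
# in train.py; DATA_DIR and CHECKPOINT_DIR are placeholders).
python -m train \
  --gin_configs=configs/blender_refnerf.gin \
  --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
  --gin_bindings="Config.checkpoint_dir = '${CHECKPOINT_DIR}'" \
  --logtostderr
```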

prakashknaikade commented 1 year ago

With the correct hyperparameters I managed to get decent results: Config.compute_normal_metrics = False, batch_size = 16384, render_chunk_size = 16384, lr_init = 0.002, lr_final = 0.00002.

Result after 110k iterations (image attached).

The results are good, with a PSNR of 33, but the training time is too long: roughly 40 hours on four 40 GB GPUs.

Is there any other research available with shorter training time that doesn't compromise scene quality? @gkouros

prakashknaikade commented 1 year ago

Even after 250k iterations with the following hyperparameters, the results are not great: Config.compute_normal_metrics = False, batch_size = 16384, render_chunk_size = 16384, lr_init = 0.002, lr_final = 0.00002.

@yzslab @jonbarron @bmild @gkouros @dorverbin