kwea123 / nerf_pl

NeRF (Neural Radiance Fields) and NeRF in the Wild using pytorch-lightning
https://www.youtube.com/playlist?list=PLDV2CyUo4q-K02pNEyDr7DYpTQuka3mbV
MIT License
2.74k stars, 483 forks

Convergence in more scenes #85

Closed — tau-yihouxiang closed this issue 3 years ago

tau-yihouxiang commented 3 years ago

Several scenes, like drums, mic, and ficus, fail to converge. I think the reason might be the small foreground area in these scenes. Could you validate this observation? Thank you!

kwea123 commented 3 years ago

I saw your other comment on the softplus activation. Unfortunately I didn't test these scenes, so I cannot validate the observation... but to my knowledge many other people have this problem; you can see the original repo's issues, for example.

I don't know exactly what makes training stable on these scenes, although the latest mip-NeRF paper says softplus alone already makes training stable across all scenes. Maybe you still need center cropping or a different optimizer.

tau-yihouxiang commented 3 years ago

@kwea123 I found two methods to stabilize training on these scenes: 1. directly reduce the learning rate from 5e-4 to 1e-4; 2. switch the optimizer from Adam to RAdam and reduce the learning rate to 2e-4.
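The second workaround is just an optimizer swap at setup time. A minimal sketch in PyTorch (the `Linear` model is a stand-in for the actual NeRF MLP, and the hyperparameters are only the values quoted above, not tested settings):

```python
import torch

# Stand-in for the NeRF MLP; any nn.Module works the same way.
model = torch.nn.Linear(3, 1)

# Method 2 from the comment: RAdam instead of Adam, with the
# learning rate reduced from 5e-4 to 2e-4.
optimizer = torch.optim.RAdam(model.parameters(), lr=2e-4)
```

`torch.optim.RAdam` is available in PyTorch 1.10+; the rest of the training loop (forward, loss, `backward()`, `step()`) stays unchanged.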

In the [JAX-NeRF implementation](https://github.com/google-research/google-research/blob/c82f9aff12dc992e90f4a4fbc732f0a5e9bb507c/jaxnerf/train.py#L98), gradient clipping is applied, so I think the cause might be large gradient values during training.
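The idea behind value clipping is simple: each gradient entry is limited to a fixed range before the optimizer step, so a single exploding component cannot destabilize the update. A minimal framework-free sketch of that operation (the function name and the clip threshold are illustrative, not from the linked code):

```python
import numpy as np

def clip_grad_by_value(grads, clip_value):
    """Element-wise gradient value clipping: limit every gradient
    entry to the range [-clip_value, clip_value] before the update."""
    return [np.clip(g, -clip_value, clip_value) for g in grads]

# Example: a gradient with one large component gets capped.
grads = [np.array([1.0, -2.0, 0.3])]
clipped = clip_grad_by_value(grads, clip_value=0.5)
```

In PyTorch the same effect is available out of the box via `torch.nn.utils.clip_grad_value_` (or `clip_grad_norm_` for norm-based clipping), called between `loss.backward()` and `optimizer.step()`.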

resurgo97 commented 2 years ago

@kwea123 Try a warmup scheduler with the Adam optimizer (as suggested in Mip-NeRF); in my experience it reliably leads to convergence. If you instead just decrease the learning rate, you end up with a lower PSNR at the end of training (by ~2 dB).
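For reference, the JAX/Mip-NeRF codebases use a warmup followed by log-linear decay from `lr_init` to `lr_final`. A self-contained sketch of that shape of schedule (the default hyperparameter values below are assumptions for illustration, not the repo's settings):

```python
import math

def learning_rate_fn(step, lr_init=5e-4, lr_final=5e-5, max_steps=200_000,
                     lr_delay_steps=2500, lr_delay_mult=0.01):
    """Warmup then log-linear decay, in the style of JAX-NeRF/Mip-NeRF.

    During the first lr_delay_steps the lr is scaled down (starting at
    lr_delay_mult * lr) and ramped back up; afterwards it decays
    log-linearly from lr_init to lr_final over max_steps.
    """
    if lr_delay_steps > 0:
        p = min(max(step / lr_delay_steps, 0.0), 1.0)
        delay = lr_delay_mult + (1 - lr_delay_mult) * math.sin(0.5 * math.pi * p)
    else:
        delay = 1.0
    t = min(max(step / max_steps, 0.0), 1.0)
    lr = math.exp((1 - t) * math.log(lr_init) + t * math.log(lr_final))
    return delay * lr
```

This can be plugged into a PyTorch training loop via `torch.optim.lr_scheduler.LambdaLR` (dividing out the base lr), or by setting `param_group["lr"]` manually each step.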