I tried to use the built-in initialization (Kaiming init) and the network could not converge. Why is Xavier init necessary?
Hi @f-sky, we tried to follow the original TensorFlow implementation as closely as possible, which uses Xavier initialization by default, and empirically we found it to work slightly better than Kaiming init (the PyTorch default). I don't have a good answer as to why one is better than the other; my guess is that neural representations could be sensitive to initialization strategies. That said, it's strange that you didn't see convergence at all. Were you running BARF or the original NeRF?
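For anyone hitting the same thing, here is a minimal sketch (not the repo's actual code; the toy MLP and the `xavier_init` helper below are purely illustrative) of how one could switch an MLP's linear layers from PyTorch's default Kaiming-uniform init to Xavier/Glorot init, which is what TensorFlow's Dense layers use by default:

```python
import torch.nn as nn

# Re-initialize every nn.Linear with Xavier/Glorot-uniform weights and zero
# biases (the TensorFlow Dense-layer defaults) instead of PyTorch's default
# Kaiming-uniform scheme. This is a hypothetical helper, not BARF's own code.
def xavier_init(module):
    if isinstance(module, nn.Linear):
        nn.init.xavier_uniform_(module.weight)  # Glorot/Xavier uniform
        if module.bias is not None:
            nn.init.zeros_(module.bias)         # TF Dense uses zero-initialized biases

# Toy MLP just to demonstrate usage; substitute your own network.
mlp = nn.Sequential(
    nn.Linear(3, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 4),
)
mlp.apply(xavier_init)  # .apply() visits all submodules recursively
```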
Closing due to inactivity, please feel free to reopen if necessary!