[Closed] ruili3 closed this issue 1 year ago
Hi, sorry for the late reply!
Yes, I have found that having a small MLP is essential for stable training. The encoder-decoder receives the training signal only through the MLP, and with a larger MLP that signal becomes noisier. I believe this is one of the key reasons why PixelNeRF does not generalize very well.
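To make the gradient-flow argument concrete, here is a minimal, self-contained sketch (not the repo's actual code; `TinyDensityMLP`, the `Conv2d` encoder, and the dummy loss are all placeholders). The point it illustrates: the rendering loss reaches the encoder only by backpropagating through the density MLP, so a shallow MLP keeps that path short.

```python
import torch
import torch.nn as nn

class TinyDensityMLP(nn.Module):
    """Small decoder in the spirit of ResnetFC with d_hidden=64 and no blocks."""
    def __init__(self, d_in: int, d_hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, d_hidden),
            nn.ReLU(inplace=True),
            nn.Linear(d_hidden, 1),  # single density output per sample
        )

    def forward(self, x):
        return self.net(x)

encoder = nn.Conv2d(3, 64, 3, padding=1)  # stand-in for the image encoder
mlp = TinyDensityMLP(d_in=64)

img = torch.randn(1, 3, 32, 32)
feats = encoder(img)                                      # (1, 64, 32, 32)
density = mlp(feats.permute(0, 2, 3, 1).reshape(-1, 64))  # decode per-pixel density
loss = density.abs().mean()                               # dummy rendering loss
loss.backward()
print(encoder.weight.grad is not None)  # True: the encoder's only gradient path is the MLP
```

Every gradient step the encoder sees has passed through the MLP's weights, which is why the depth and width of that MLP shape how noisy the encoder's training signal is.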
Hi Brummi,
Thanks a lot for the awesome code! I noticed that you use quite a small MLP to render the density field, e.g., `ResnetFC` with a small hidden dimension (64) and no ResNet blocks (0). I wonder whether a larger MLP, with 1) larger hidden dimensions or 2) more layers, would lead to suboptimal results according to your previous experiments (see the sketch below for the two configurations I have in mind).
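For concreteness, here is a rough sketch of the two decoder sizes, using a plain MLP as a stand-in for `ResnetFC` (this is not the repo's actual class; `d_in=64` is just a placeholder feature size, and each "block" is approximated by one extra hidden layer):

```python
import torch.nn as nn

def make_mlp(d_in: int, d_hidden: int, n_blocks: int, d_out: int = 1) -> nn.Sequential:
    """Plain-MLP stand-in for ResnetFC: each 'block' is one extra hidden layer."""
    layers = [nn.Linear(d_in, d_hidden), nn.ReLU(inplace=True)]
    for _ in range(n_blocks):
        layers += [nn.Linear(d_hidden, d_hidden), nn.ReLU(inplace=True)]
    layers.append(nn.Linear(d_hidden, d_out))
    return nn.Sequential(*layers)

small = make_mlp(d_in=64, d_hidden=64, n_blocks=0)   # the configuration used here
large = make_mlp(d_in=64, d_hidden=128, n_blocks=5)  # a larger, pixelNeRF-style alternative

n_params = lambda m: sum(p.numel() for p in m.parameters())
print(n_params(small), n_params(large))  # the larger decoder has far more parameters
```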
Thanks a lot for the information :)