Closed: wonjunior closed this issue 11 months ago
Maybe the ReLU in the MLP that predicts roughness is dead (all of its inputs are negative), so only a single value (the bias of the last layer) is predicted. You may check whether this is the cause of the problem. Reducing the learning rate or switching to LeakyReLU may remedy it.
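A minimal sketch of that failure mode (hypothetical layer sizes, not NeRO's actual network): once every pre-activation of a ReLU layer is negative, the layer outputs zeros, no gradient flows back, and the prediction collapses to the last layer's bias. LeakyReLU keeps a small slope on the negative side, so the units can recover.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)

# Hypothetical one-hidden-layer MLP whose hidden pre-activations have drifted
# strongly negative (e.g. after a too-large gradient step).
W1 = rng.normal(size=(3, 64))
b1 = np.full(64, -20.0)       # large negative bias: every pre-activation < 0
W2 = rng.normal(size=(64, 1))
b2 = np.array([0.4])

x = rng.normal(size=(1024, 3))
pre = x @ W1 + b1
h = relu(pre)                 # all units dead: h is exactly zero
out = h @ W2 + b2             # so the MLP predicts the last-layer bias everywhere

print((h > 0).mean())         # fraction of active units: 0.0
print(np.allclose(out, 0.4))  # constant prediction: True

# LeakyReLU still passes gradient on the negative side, so training can escape.
h_leaky = leaky_relu(pre)
print((h_leaky != 0).mean())  # 1.0: no unit is fully dead
```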
Thank you for your prompt reply!
As far as I understand, though, the activation used on the roughness predictor is a sigmoid rescaled to [0.04², 1] (cf. field.py). I am aware you use a regularization on the roughness and metalness to avoid saturating at 0.04² or 1. When I keep those loss terms, however, the roughness still saturates to a value around 0.4.
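For reference, the rescaling I mean (my own minimal sketch, assuming a plain affine rescale of the sigmoid; the actual form is in field.py):

```python
import numpy as np

def rescaled_sigmoid(x, lo=0.04**2, hi=1.0):
    """Sigmoid squashed into [lo, hi]; with lo = 0.04**2 the output can never
    reach exactly 0.04**2 or 1, it only saturates towards them."""
    s = 1.0 / (1.0 + np.exp(-x))
    return lo + (hi - lo) * s

x = np.array([-30.0, 0.0, 30.0])
print(rescaled_sigmoid(x))  # endpoints approach 0.04**2 and 1.0; midpoint ~0.5008
```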
I appreciate your feedback.
Maybe some of the earlier ReLUs are dead, so the output would be the same for all coordinates. https://github.com/liuyuan-pal/NeRO/blob/3b4d421a646097e7d59557c5ea24f4281ab38ef1/network/field.py#L330
That makes sense, thank you for the help!
I had a question regarding the MC importance sampling. I tried applying your method to a dielectric, highly glossy object. The first stage works well, but stage 2 less so. While debugging, I trained the BRDF MLP and the albedo/roughness MLPs while keeping the metallic map fixed at 0, and used MC importance sampling with the ground-truth (gt) envmap. It appears that the roughness prediction gets stuck: after a few iterations the MLP predicts a single value (around 0.4) for every point in the scene. Any idea why this might be happening? Estimating the roughness parameter seems tricky even with the provided gt environment map. I appreciate any leads, and thank you for your great work.
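For extra context, here is my understanding of the sampling step: a generic GGX half-vector importance sampler in the local shading frame (normal = +z). This is my own sketch, not code from NeRO; it also illustrates why low roughness is delicate, since the sampled lobe becomes extremely tight around the normal.

```python
import numpy as np

def sample_ggx_half_vectors(roughness, n, rng):
    """Importance-sample half-vectors from the GGX normal distribution.
    Standard inverse-CDF derivation with alpha = roughness**2."""
    alpha = roughness ** 2
    u1 = rng.uniform(size=n)
    u2 = rng.uniform(size=n)
    cos_theta = np.sqrt((1.0 - u1) / (1.0 + (alpha**2 - 1.0) * u1))
    sin_theta = np.sqrt(np.maximum(0.0, 1.0 - cos_theta**2))
    phi = 2.0 * np.pi * u2
    return np.stack([sin_theta * np.cos(phi),
                     sin_theta * np.sin(phi),
                     cos_theta], axis=-1)

rng = np.random.default_rng(0)
h_glossy = sample_ggx_half_vectors(0.05, 10_000, rng)  # highly glossy
h_rough = sample_ggx_half_vectors(0.8, 10_000, rng)    # rough

print(h_glossy[:, 2].mean())  # near 1: the lobe hugs the normal at low roughness
print(h_rough[:, 2].mean())   # noticeably smaller: the lobe spreads out
```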