Closed: salykova closed this issue 2 months ago
Thanks for the findings!
> I noticed (#L135) that you use only 15 output values of the density MLP (mlp_base) as input to the color MLP (mlp_head), whereas in the original NGP all 16 outputs are used. May I ask why you don't use the first output value? Maybe I misunderstood/missed something?
I didn't put too much effort into aligning the settings with the NGP paper, so I think I missed this design detail. Will try.
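For concreteness, here is a minimal sketch of the NGP-style wiring being asked about: the density MLP emits 16 values, the first one is the raw density, and all 16 (not just the last 15) are concatenated with the encoded view direction and fed to the color MLP. The layer sizes and random weights below are illustrative assumptions, not nerfacc's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights for a tiny 2-layer density MLP and color MLP;
# shapes follow the NGP convention: the density MLP outputs 16 values.
W1 = rng.normal(size=(32, 64)) * 0.1   # density MLP, hidden layer
W2 = rng.normal(size=(64, 16)) * 0.1   # density MLP, 16-dim output
W3 = rng.normal(size=(16 + 16, 64)) * 0.1  # color MLP takes ALL 16 + view enc.
W4 = rng.normal(size=(64, 3)) * 0.1    # color MLP, rgb output

def relu(x):
    return np.maximum(x, 0.0)

def forward(features, view_dirs_enc):
    h = relu(features @ W1) @ W2                 # (N, 16) density MLP output
    raw_density = h[:, :1]                       # first value -> raw density
    rgb_in = np.concatenate([h, view_dirs_enc], axis=-1)  # all 16 values reused
    rgb = 1.0 / (1.0 + np.exp(-(relu(rgb_in @ W3) @ W4)))  # sigmoid rgb
    return raw_density, rgb

d, c = forward(rng.normal(size=(4, 32)), rng.normal(size=(4, 16)))
print(d.shape, c.shape)  # (4, 1) (4, 3)
```

Dropping the first channel before mlp_head, as in the linked line, would make rgb_in 15 + 16 dimensional instead.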
> In the original NGP there is no activation function after the density MLP (mlp_base), whereas you apply an exp() function on the density.
I think NGP uses a ReLU activation (please correct me if I'm wrong). The exp activation is something that we found to be more helpful.
Hi @liruilong940607,
NGP uses a ReLU activation only on the hidden layers and on the output of the color network (= mlp_head in your case), but there is no activation function on the output of the density network (= output of mlp_base in your case). So the output of mlp_base should be fed directly into mlp_head without an activation. It was discussed here: https://github.com/NVlabs/instant-ngp/discussions/167. These are just minor details; I don't think they will significantly increase/decrease the performance.
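A small sketch of the activation placement described above (an illustration of the convention under discussion, not nerfacc's actual code): the density MLP's 16-dim output h goes to the color MLP with no activation in between, and a density activation such as exp() is applied only when the first channel is converted into sigma for volume rendering. The toy color head below is a hypothetical stand-in.

```python
import numpy as np

def split_density_output(h, color_head):
    # Activation is applied to the density channel only; the raw,
    # un-activated h is what the color head sees.
    sigma = np.exp(h[:, 0])
    rgb = color_head(h)
    return sigma, rgb

# Toy color head (hypothetical): a fixed linear map followed by a sigmoid.
rng = np.random.default_rng(1)
W = rng.normal(size=(16, 3)) * 0.1
color_head = lambda h: 1.0 / (1.0 + np.exp(-(h @ W)))

h = rng.normal(size=(5, 16))  # pretend mlp_base output
sigma, rgb = split_density_output(h, color_head)
print(sigma.shape, rgb.shape)  # (5,) (5, 3)
```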
Thanks for the correction! Yeah, I also feel it should not affect things too much. But great finding!
Dear nerfacc developers,
I noticed (#L135) that you use only 15 output values of the density MLP (mlp_base) as input to the color MLP (mlp_head), whereas in the original NGP all 16 outputs are used. May I ask why you don't use the first output value? Maybe I misunderstood/missed something?
In the original NGP there is no activation function after the density MLP (mlp_base), whereas you apply an exp() function on the density.
From the NGP paper:
P.S. I checked the GitHub issues, but this hasn't been discussed yet. It would be very helpful if you could clarify. Thanks in advance!