YZsZY opened this issue 1 year ago
Hello author! Thanks for your great work! I have a question: why does the feature MLP use FullyFusedMLP for NeRF but VanillaMLP for NeuS? Is there a reason behind this design?

Hi! Theoretically, FullyFusedMLP should be faster than VanillaMLP, so we use it for NeRF (although we do not observe a significant speedup lol). But for NeuS, computing the analytic normal requires second-order derivatives, which FullyFusedMLP does not support, so we use the vanilla torch MLP. Note that if you use finite differences to compute the normal like here, you should be able to use FullyFusedMLP, but finite differencing seems to be slower and less effective.
Got it! Thanks a lot~