Closed — bennyguo closed this issue 2 years ago
Thanks!
The gradient scaling makes the hash grid encoder train to a sharper result in the first pass, most notably on models with high-frequency textures. It is not a crucial optimization, however, since the second pass, where we switch to standard 2D textures, usually increases sharpness regardless.
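As a framework-free sketch of one common way such gradient scaling is implemented (the `s * x + (1 - s) * x.detach()` identity trick, familiar from PyTorch code): the forward value passes through unchanged, while the gradient reaching the encoder is multiplied by `s`. The tiny `Var` autodiff class and all numbers below are illustrative, not the repo's code.

```python
class Var:
    """Scalar with just enough reverse-mode autodiff for this demo."""
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self.parents = parents  # pairs of (parent Var, local derivative)

    def _wrap(self, other):
        return other if isinstance(other, Var) else Var(other)

    def __add__(self, other):
        other = self._wrap(other)
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        other = self._wrap(other)
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    __radd__ = __add__
    __rmul__ = __mul__

    def detach(self):
        # Break the graph: same value, no parents (like Tensor.detach()).
        return Var(self.value)

    def backward(self, upstream=1.0):
        self.grad += upstream
        for parent, local in self.parents:
            parent.backward(upstream * local)


def scale_gradient(x, s):
    """Identity in the forward pass; multiplies the backward gradient by s."""
    return s * x + (1.0 - s) * x.detach()


# Toy pipeline: encoder output e -> linear "MLP" -> squared-error loss.
grads = {}
for s in (1.0, 0.5):
    e = Var(2.0)   # stand-in for one hash-grid encoder output
    w = Var(3.0)   # stand-in MLP weight
    pred = w * scale_gradient(e, s)
    diff = pred + (-5.0)          # target value 5.0
    loss = diff * diff
    loss.backward()
    grads[s] = e.grad             # analytically: s * 2*(w*e - 5)*w = 6*s
```

With `s = 1.0` the encoder sees the raw gradient (6.0); with `s = 0.5` it sees half of it (3.0), while the forward prediction is identical in both cases. This is also why setting `gradient_scaling=1.0` only changes how fast the encoder moves relative to the MLP, not what the model can ultimately fit.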
We use only the encoder from tcnn for compatibility reasons. The CUTLASS MLPs were not compatible with some older servers we used during development, so with a vanilla PyTorch MLP we could deploy the code on more machines. It is very easy to switch back to the tcnn MLP (or, preferably, the combined encoder+MLP from tcnn) if you want.
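A minimal sketch of that split, assuming PyTorch: the decoder is plain `torch.nn`, so it runs on any machine, while the encoder (a `tcnn.Encoding` in the repo) remains CUDA-bound. The class name, layer sizes, and input width below are illustrative, not the repo's actual module.

```python
import torch
import torch.nn as nn

class VanillaMLP(nn.Module):
    """Plain PyTorch MLP decoder; hyperparameters here are illustrative."""
    def __init__(self, in_dim, hidden_dim=64, out_dim=3, n_hidden=2):
        super().__init__()
        layers = [nn.Linear(in_dim, hidden_dim), nn.ReLU(inplace=True)]
        for _ in range(n_hidden - 1):
            layers += [nn.Linear(hidden_dim, hidden_dim), nn.ReLU(inplace=True)]
        layers.append(nn.Linear(hidden_dim, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# Switching back would mean replacing VanillaMLP with tcnn.Network, or fusing
# encoder and decoder via tcnn.NetworkWithInputEncoding (requires tinycudann).
mlp = VanillaMLP(in_dim=32)        # 32 = e.g. the hash-grid encoder's output width
out = mlp(torch.randn(8, 32))      # a batch of 8 encoded points -> RGB-like output
```

The only contract between the two halves is the encoder's output width, which is why swapping decoder backends is cheap.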
Thanks for the reply! Closing the issue now.
Hello,
Thanks for releasing the code for this amazing work!
I've been going through the code and found parts of the TextureMLP3D implementation confusing:
1) What is the purpose of scaling the gradient? I set
gradient_scaling=1.0
and did not see much difference in the output.
2) Why use only the tcnn Encoding rather than the tcnn MLP as well?
Thanks!