alvaro-budria opened 1 year ago
For those having a similar problem in the future, I found my error: I was sampling the mesh just once. Increasing the number of samples helped, but nonetheless resulted in a noisy surface. I switched to resampling the mesh at each iteration, and now I get a good result, as expected.
I found that the torch-ngp project resamples the mesh at each epoch. However, I looked at the code from instant-NGP (this repo) and it seems the mesh is sampled just once (function load_training_data from testbed.cu). Could anyone confirm or correct me, please?
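The per-iteration resampling described above can be sketched in plain NumPy. This is not instant-ngp's or torch-ngp's actual code; the function name and the square-root barycentric trick are just one standard way to sample a triangle mesh uniformly by area:

```python
import numpy as np

def sample_surface(vertices, faces, n):
    """Area-weighted uniform sampling of points on a triangle mesh
    (comparable to trimesh.sample.sample_surface)."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    face_idx = np.random.choice(len(faces), size=n, p=areas / areas.sum())
    # Random barycentric coordinates; the sqrt keeps the distribution
    # uniform over each triangle rather than clustered near one vertex.
    r1, r2 = np.random.rand(2, n)
    s1 = np.sqrt(r1)
    return (v0[face_idx] * (1 - s1)[:, None]
            + v1[face_idx] * (s1 * (1 - r2))[:, None]
            + v2[face_idx] * (s1 * r2)[:, None])

# The fix described above: draw a *fresh* batch every iteration
# instead of reusing one fixed point set.
# for step in range(num_steps):
#     pts = sample_surface(V, F, batch_size)
#     ...train on pts...
```

Resampling each iteration means the network never sees the exact same finite point set twice, which acts as a form of data augmentation and avoids fitting the noise of a single draw.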
Hello, I am currently trying to use the hash grid from tinycudann plus a pure PyTorch MLP to fit an SDF directly from (3D point, distance) supervision, just like in the example from the I-NGP paper. I used the following command to fit an SDF on a SMPL human model with I-NGP's implementation:
The results are very good, especially considering that only 2000 iterations are performed.
As for my implementation, I have put together a simple system that contains only a training loop, a tinycudann hashgrid, and a PyTorch MLP, which predicts the distance for a given point. As for the sampling, I draw 4/8 of the points on the surface, 3/8 around it, and 1/8 uniformly within the AABB. The results are not quite as good:
The model just fails to capture the finer details of the fingers and the head.
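The 4/8 / 3/8 / 1/8 sampling mixture described above can be sketched as follows. This is only an illustration of the stated ratios, not the author's actual code; `sigma`, the function name, and the single-Gaussian perturbation are assumptions (instant-ngp's own near-surface perturbation scheme may differ):

```python
import numpy as np

def sample_training_points(surface_pts, aabb_min, aabb_max, batch, sigma=0.01):
    """Mix a training batch: 4/8 on the surface, 3/8 perturbed
    around it, 1/8 uniform in the AABB."""
    n_surf = batch * 4 // 8
    n_near = batch * 3 // 8
    n_unif = batch - n_surf - n_near
    idx = np.random.randint(len(surface_pts), size=n_surf + n_near)
    on_surf = surface_pts[idx[:n_surf]]
    # sigma is an illustrative noise scale, not instant-ngp's actual value
    near = surface_pts[idx[n_surf:]] + sigma * np.random.randn(n_near, 3)
    unif = aabb_min + (aabb_max - aabb_min) * np.random.rand(n_unif, 3)
    return np.concatenate([on_surf, near, unif], axis=0)
```

One thing worth checking against the reference implementation is the near-surface noise scale: too large and fine structures like fingers get blurred, too small and the network receives almost no off-surface supervision near thin parts.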
I don't understand where the discrepancy could come from. I tried improving on the following aspects:
the hashgrid configuration: Nmin=16, b=1.38191, F=2, T=2^19, L=16. Increasing the maximum resolution or the codebook size T does not really have an impact, except on speed.

Especially since a pure, big MLP architecture cannot reach the same results as I-NGP, I suspect there are other critical optimizations that I am not taking into account, or some gross error that I am committing.
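For reference, those hyperparameters map onto a tinycudann HashGrid config as below. The relation b = exp((ln Nmax − ln Nmin) / (L − 1)) comes from the I-NGP paper's geometric level spacing, so b = 1.38191 with Nmin = 16 and L = 16 corresponds to a maximum resolution of roughly 2048 (the variable names outside the dict are illustrative):

```python
import math

n_levels, n_min, per_level_scale = 16, 16, 1.38191  # L, Nmin, b

# b = exp((ln Nmax - ln Nmin) / (L - 1))  =>  Nmax = Nmin * b**(L - 1)
n_max = n_min * per_level_scale ** (n_levels - 1)  # ~2048

encoding_config = {
    "otype": "HashGrid",
    "n_levels": n_levels,                 # L
    "n_features_per_level": 2,            # F
    "log2_hashmap_size": 19,              # T = 2^19
    "base_resolution": n_min,             # Nmin
    "per_level_scale": per_level_scale,   # b
}
```

If T is already large enough that the finest levels are not heavily hash-colliding for this mesh, increasing it further mostly costs memory and speed without improving quality, which would match the observation above.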
I would greatly appreciate it if others with experience with these systems could share their insight. What are improvements that I-NGP is making that I am not?