Closed ryunuri closed 1 year ago
Hi, thanks for the valuable comments!
About the initialization of bias in linear layers, I think the current implementation achieves the same effect as the original code you mentioned: the bias is set to 0 in the intermediate layers and to `-bias` in the last layer, which is done by `sdf_bias` in our implementation. The only difference is that I set `sdf_bias=0` instead of `sdf_bias=-0.5` as in the original setting, which leads to a smaller sphere.
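To make the effect of `sdf_bias` on the initial sphere concrete, here is a minimal sketch. It assumes a geometric initialization where the network's initial output is roughly `||x|| + sdf_bias` (an assumption about the implementation, not the repository's exact code):

```python
import numpy as np

# Assumption: with a geometric (sphere) initialization, the network's
# initial output is approximately sdf(x) = ||x|| + sdf_bias, so the
# zero level set is a sphere of radius -sdf_bias.
def approx_init_sdf(x, sdf_bias):
    return np.linalg.norm(x, axis=-1) + sdf_bias

# sdf_bias = -0.5: zero crossing at radius 0.5.
# sdf_bias = 0: the zero level set collapses toward the origin,
# i.e. a much smaller initial sphere.
on_sphere = np.array([0.5, 0.0, 0.0])
print(approx_init_sdf(on_sphere, sdf_bias=-0.5))  # 0.0
print(approx_init_sdf(on_sphere, sdf_bias=0.0))   # 0.5
```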
I also found that setting `sdf_bias=-0.6` could result in bad training results like the ones you showed. It seems to be a training-instability problem, which can be solved by adopting a learning-rate warm-up as in the original NeuS implementation. I have already updated the config file to support such a warm-up strategy, please have a try!
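For reference, the warm-up mentioned above can be sketched framework-agnostically like this (the step count and base LR here are hypothetical, not the values in the updated config):

```python
# Minimal sketch of a linear learning-rate warm-up in the spirit of the
# original NeuS schedule. base_lr and warmup_steps are hypothetical
# values, not the ones in the updated config file.
base_lr = 1e-3
warmup_steps = 500

def lr_at(step):
    # Ramp the learning rate linearly from near zero up to base_lr over
    # the first warmup_steps iterations, then hold it constant.
    return base_lr * min(1.0, (step + 1) / warmup_steps)

print(lr_at(0))     # 2e-06
print(lr_at(499))   # 0.001
print(lr_at(5000))  # 0.001
```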
Here I compare `sdf_bias=0, w/o warm-up` and `sdf_bias=-0.5, w/ warm-up` on the Lego scene. It seems that the latter achieves higher quality:
Thanks for your feedback!
I tried out the warm-up strategy and the quality improved quite a bit on my custom dataset. However, I'm still having some difficulties extracting a high-quality mesh from it. I guess I'll try some more experiments and share the results if I see some improvements.
Hi, thanks for sharing your code.
I've been trying out several things and found something weird. When using sphere initialization of the vanilla MLP, I expected the initial shape to be a sphere. If you render the outputs of the initialized model by setting `val_check_interval=1`, the images (RGB, normal, depth) indeed resemble a sphere.
However, marching cubes fails with the following error message:
I guess this means that the aabb cube is empty.
When I looked into the code, I found that `VanillaMLP` does not initialize the biases (constant terms) of the layers, which differs from the initialization in the paper "SAL: Sign Agnostic Learning of Shapes from Raw Data".
I think the `make_linear` function should be as follows.
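A sketch in the spirit of the SAL geometric initialization, with the bias constants included, might look like this (the `make_linear` signature and the default radius are my assumptions, not the repository's exact code):

```python
import math
import torch.nn as nn

# Sketch of a geometric ("sphere") initialization in the spirit of SAL,
# including the bias constants. The make_linear signature and the
# default radius are assumptions, not the repository's exact code.
def make_linear(dim_in, dim_out, is_last, radius=0.5):
    layer = nn.Linear(dim_in, dim_out, bias=True)
    if is_last:
        # Last layer: weights concentrated around sqrt(pi / dim_in) and
        # a negative constant bias, so the initial SDF is roughly
        # ||x|| - radius.
        nn.init.normal_(layer.weight,
                        mean=math.sqrt(math.pi) / math.sqrt(dim_in),
                        std=1e-4)
        nn.init.constant_(layer.bias, -radius)
    else:
        # Hidden layers: He-style weights and, crucially, zero bias.
        nn.init.normal_(layer.weight,
                        mean=0.0, std=math.sqrt(2.0) / math.sqrt(dim_out))
        nn.init.constant_(layer.bias, 0.0)
    return layer
```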
Also, in the `forward` and `forward_level` methods of the `VolumeSDF` class, the if statement is `True` even when you simply set `sdf_activation` to `None` in the config, since the key is still present in the config. I found that this leads the SDF values to be all positive at the start of training, so I just removed `sdf_activation` from the config.

After changing this part and setting the bias of the SDF to 0.6, the initial model output is as follows:
And the result of marching cubes is indeed a sphere.
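For what it's worth, the `sdf_activation` issue above reduces to checking for a config key's presence rather than its value; a minimal standalone illustration (names are hypothetical, not the repository's exact code):

```python
# The pitfall: a key set to None/null in the config is still *present*,
# so a presence check passes even though no activation was intended.
# Names here are hypothetical, not the repository's exact code.
config = {"sdf_activation": None}

by_presence = "sdf_activation" in config             # True: key exists
by_value = config.get("sdf_activation") is not None  # False: value is None

print(by_presence, by_value)  # True False
```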
However, I found that changing the model like this leads to very poor training results.
After 1000 iterations,
Also, the mesh is completely broken
So, I guess you had a reason for this design choice? Otherwise, I think this might be the reason why training the model on my custom dataset fails.