fdarmon / NeuralWarp

Code release of the paper "Improving neural implicit surfaces geometry with patch warping"

Question about the two-stage training #5

Closed: o0Helloworld0o closed this issue 2 years ago

o0Helloworld0o commented 2 years ago

Dear author,

According to the paper, the training pipeline has two stages: first, train for 100k iterations with the same settings as VolSDF, then finetune for 50k iterations with the proposed method. Does this mean that I need to add the options "--is_continue --timestamp XXXXX" for stage 2? Moreover, according to the paper, the learning rate of stage 2 is 1e-5, which differs from the learning rate (5.0e-4) in NeuralWarp.conf. Do I need to change the learning rate in the configuration to 1e-5? Thanks!

fdarmon commented 2 years ago

Hello,

There is no need to add --is_continue --timestamp XXX because NeuralWarp.conf already contains finetune_exp = baseline. This means the model reloads the weights of the baseline model at the beginning of training.
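
For illustration, a minimal sketch of what that reload step could look like; the folder layout, filenames, and state-dict key below are assumptions loosely based on the VolSDF/IDR-style codebases, not NeuralWarp's actual code:

```python
import os
import torch

# Hypothetical sketch of the behaviour triggered by finetune_exp = baseline:
# before finetuning starts, weights are restored from the baseline experiment.
# Paths and the "model_state_dict" key are assumptions for illustration only.
def load_finetune_weights(model, exps_folder, finetune_exp="baseline"):
    ckpt = os.path.join(exps_folder, finetune_exp, "checkpoints", "latest.pth")
    state = torch.load(ckpt, map_location="cpu")
    model.load_state_dict(state["model_state_dict"])
    return model
```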

About the learning rate: thanks for pointing out this mistake in the paper, I will update it with the correct value. All the results in the paper and the pretrained models were trained with the exact configuration provided in the repo. You may try 1e-5 for finetuning, but I do not expect it to change the results much.
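
If you want to experiment with the paper's value, a minimal sketch of overriding the rate on a standard PyTorch optimizer (the dummy module below merely stands in for the NeuralWarp networks; only the optimizer handling is the point):

```python
import torch

# Stand-in module; in practice this would be the NeuralWarp networks.
net = torch.nn.Linear(3, 1)

# Optimizer built with the repo default from NeuralWarp.conf.
optimizer = torch.optim.Adam(net.parameters(), lr=5.0e-4)

# Switch to the learning rate stated in the paper for the finetuning stage.
for group in optimizer.param_groups:
    group["lr"] = 1e-5
```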