[Open] shubham-goel opened 2 years ago
Hi,
Can you please share the training command (with hyper-parameters) for reproducing the numbers in Table 1 of the main paper?

I'm unable to reproduce the reported results for implicit_pdf on the SYMSOL1 dataset. The paper (arXiv version) specifies the following hyper-parameters for reproducing results (Section S8). The symmetric_solids dataset in tfds only contains 50k images each, and that's what I train on. The corresponding training command is:

--symsol_shapes symsol1

However, the trained model seems to be overfitting: gt_log_likelihood starts decreasing for cyl and cone after ~3k iterations. Please see the uploaded TensorBoard logs. Reducing the depth of the MLP network to the default 2 layers didn't help either.

Thanks and Regards,
Shubham

(comment from another user)
Hi,
I am facing this problem too. I cannot reproduce the results reported in the main paper. First I tried the configs in this link, but the maximum accuracy on the training dataset was about 35%. After that, I tried to reproduce the results by changing hyper-parameters such as the learning rate and the number of iterations; the best accuracy I reached was 83% on the training dataset and 89% on the validation dataset. My flags were:

--how_many_training_steps 200,200,200,200,200,200,200,200,200,200,200,200,200,200,200,200,200,200 --learning_rate 2e-2,1e-2,1e-3,1e-4,1e-5,1e-6,2e-2,1e-2,1e-3,1e-4,1e-5,1e-6,2e-2,1e-2,1e-3,1e-4,1e-5,1e-6