hskAlena opened this issue 1 year ago
This is the rendered output at the 200K step in training setting 1 (20 images). The final PSNR is 14.60 when I add weight decay to the Adam optimizer and 11.25 when I add nothing.
Similarly, I got this output in the same setting, with a PSNR of 18.73 with weight decay and 13 without it.
How can I improve these outputs to reach an average PSNR of 26?
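For reference, this is roughly what I mean by "adding weight decay" (a minimal sketch, not the repo's exact code: in run_nerf.py the parameters come from create_nerf(), and the tiny MLP and the weight_decay value below are just placeholders):

```python
import torch
import torch.nn as nn

# Minimal sketch of adding weight decay to Adam. In the real training script
# the parameters come from create_nerf() in run_nerf.py; this tiny MLP is
# only a stand-in, and 1e-6 is a placeholder value, not a recommendation.
model = nn.Sequential(nn.Linear(3, 256), nn.ReLU(), nn.Linear(256, 4))
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=5e-4,               # nerf-pytorch default learning rate
    betas=(0.9, 0.999),
    weight_decay=1e-6,     # defaults to 0; this is the extra regularization
)
```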
Hi hskAlena, we are experiencing a similar issue. We got PSNR=25 on the chair but only PSNR=15 on the lego. Did you see something similar on those scenes?
Ohh mine is different. I got PSNR=18 on the chair and PSNR=21 on the lego. Is your setting similar to mine? Can you please share your setting? Also, I used seed 0 in the results above.
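In case it matters, this is roughly how I fix the seed (a sketch assuming the usual torch/numpy setup; the repo's own seeding may differ):

```python
import random
import numpy as np
import torch

def set_seed(seed: int = 0):
    # Fix the common RNG sources so runs are comparable across machines.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

set_seed(0)
```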
@hskAlena How do you choose the initial views?
The original code seems to use 20 initial views according to data/hotdog/transforms_train.json.
However, the paper says they used 4 initial views, so I followed the same method they used for the LLFF dataset: https://github.com/LeapLabTHU/ActiveNeRF/blob/83f1329c0d9c49e4e11ca1d23dd17cf184625d28/run_nerf.py#L596
Therefore, I modified transforms_train.json to keep only the first 4 views ("./train/r_0", "./train/r_1", "./train/r_2", "./train/r_3") and moved views 4–19 to transforms_holdout.json.
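Concretely, the split I made looks roughly like this (a sketch assuming the standard Blender transforms_train.json layout; only 'camera_angle_x' and 'frames' are carried over):

```python
import json

# Sketch of the split for the hotdog scene: the first 4 frames stay as the
# initial training views; the remaining 16 of the 20 frames go to the holdout file.
with open('data/hotdog/transforms_train.json') as fp:
    meta = json.load(fp)

train_meta = {'camera_angle_x': meta['camera_angle_x'], 'frames': meta['frames'][:4]}
holdout_meta = {'camera_angle_x': meta['camera_angle_x'], 'frames': meta['frames'][4:]}

with open('data/hotdog/transforms_train.json', 'w') as fp:
    json.dump(train_meta, fp, indent=4)
with open('data/hotdog/transforms_holdout.json', 'w') as fp:
    json.dump(holdout_meta, fp, indent=4)
```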
OK. We use every 5th image among the first 20 images in the training split ("./train/r_0", "./train/r_5", "./train/r_10", "./train/r_15"). I think that's why we produce different results.
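In other words, our selection is roughly this (a sketch, using the hotdog file as an example; the remaining 16 of the first 20 frames form the holdout pool):

```python
import json

# Our pick: every 5th frame among the first 20 training frames,
# i.e. indices 0, 5, 10, 15 (r_0, r_5, r_10, r_15).
with open('data/hotdog/transforms_train.json') as fp:
    frames = json.load(fp)['frames'][:20]

init_idx = set(range(0, 20, 5))                                   # {0, 5, 10, 15}
init_frames = [f for i, f in enumerate(frames) if i in init_idx]
holdout_frames = [f for i, f in enumerate(frames) if i not in init_idx]
```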
@RPFey @Panxuran @LeapLabTHU How many iterations did you train the model for before evaluation? I evaluated the model at 200k iterations. Also, did you use the whole 200-image test set in the synthetic scenes?
We train for 200k iterations. The test skip is 8, i.e., we pick one of every 8 images in the test set, so 25 test images in total.
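That is the usual nerf-pytorch-style test subsampling (a sketch; the exact flag name in this repo may differ):

```python
# With 200 test images and a test skip of 8, evaluation uses images
# 0, 8, 16, ..., 192, i.e. 25 images in total.
testskip = 8
test_idx = list(range(0, 200, testskip))
assert len(test_idx) == 25
```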
@hskAlena Hi, have you reproduced the results in the paper?
Nope, I couldn't reproduce the results with the given information. However, I got results close to the reported ones by changing the model, adjusting the hyperparameters, and changing the training scheme. I changed almost everything except the acquisition function and the uncertainty loss (well, I changed their coefficients too), haha. I don't know if I can call this "reproduced"...
Hi @RPFey, I'm experiencing the same issue with reproducing the results in the table. Could you provide the hyperparameters you used to reproduce them? Also, the training variance across different seeds seems large; sometimes the model fails to converge with certain seeds. Is this normal? Thanks for the clarification!
Hello, I'm trying to reproduce the synthetic scene results in setting 1, but I can't. I believe the released code is ActiveNeRF-CL, right? Could you check the config settings I used? The average PSNR I got is 13...