Closed: zhuhd15 closed this issue 1 year ago.
Hi @zhuhd15,
Although the released models were trained for longer, I observed no change in the metric values after 250-300k iterations; we simply released the later checkpoints because those were the ones I had on hand. Please let me know if you run into any trouble reproducing these numbers.
Thank you so much for your prompt reply!
We downloaded the released model and tested it on three cases: 1) single-scene training on drums, 2) the generalizable setting on LLFF, and 3) the generalizable setting on the Synthetic dataset. Using the scripts provided in the repo, we observed the following numbers (drums as an example):
It seems the model retrained with N_rand set to 4096, following your paper, has worse LPIPS and SSIM scores, and the relative gaps are not small, especially for LPIPS (~25% relative difference). Is there anything we can do to reproduce the numbers of the released model?
Thanks!
We have observed that although GNT renders quite reasonably in most cases, regions with a plain background tend to come out a shade darker than the ground truth (an inherent drawback of using attention); for example, the white background in drums. To verify, try identifying the background (using the ground-truth mask or any other segmentation method), force-setting it to white, and then recomputing the above metrics.
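The whitening check suggested above can be sketched roughly as follows. This is a minimal NumPy illustration, not code from the GNT repo: `whiten_background` and `psnr` are hypothetical helper names, and the toy images simply mimic a render whose white background came out slightly dark. In practice you would load the real renders, ground-truth images, and masks, and recompute LPIPS/SSIM with your usual tooling.

```python
import numpy as np

def whiten_background(img, bg_mask, bg_value=1.0):
    """Force pixels flagged as background to a constant (white) value.

    img: (H, W, 3) float array in [0, 1]
    bg_mask: (H, W) boolean array, True where the pixel is background
    """
    out = img.copy()
    out[bg_mask] = bg_value
    return out

def psnr(pred, gt):
    """PSNR in dB for images in [0, 1] (peak value 1)."""
    mse = np.mean((pred - gt) ** 2)
    return -10.0 * np.log10(mse)

# Toy example: ground truth is pure white; the render is slightly dark,
# more so in the background region than in the foreground.
gt = np.ones((4, 4, 3))
render = np.full((4, 4, 3), 0.95)
bg_mask = np.zeros((4, 4), dtype=bool)
bg_mask[:2, :] = True          # top half is "background"
render[bg_mask] = 0.9          # background rendered darker

before = psnr(render, gt)
after = psnr(whiten_background(render, bg_mask), gt)
# Forcing the background to white should improve the metric,
# isolating how much of the gap the dark background accounts for.
```

The same masking step would precede the SSIM and LPIPS computations; only the metric function changes.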
Thanks for your response!
Hi, thanks for the fantastic work!
I've been attempting to replicate the results using the training configurations provided in the repository. However, the iteration counts of the pretrained models don't align with the configs. The paper says GNT was trained with N_rand set to 4096 for 250k iterations in all experiments, while the released model names suggest much longer training (for instance, the fern model appears to have been trained for 840k iterations and the generalization model for 720k).
When I trained the models following your configs, I noticed a significant discrepancy compared to the released models. Could you update the configurations or training strategy so that we can accurately reproduce the numbers of the released models? Thank you so much!