VITA-Group / GNT

[ICLR 2023] "Is Attention All NeRF Needs?" by Mukund Varma T*, Peihao Wang* , Xuxi Chen, Tianlong Chen, Subhashini Venugopalan, Zhangyang Wang
https://vita-group.github.io/GNT
MIT License

Questions about proper training epoch #10

Closed YJ-142150 closed 1 year ago

YJ-142150 commented 1 year ago

Hi, thanks for the nice work! I trained GNT for 50k–100k epochs without using the pretrained model, but the results didn't seem as good as the pretrained ones. Is there a recommended number of training epochs? And does N_rand affect performance? The only config value I changed was N_rand, due to limited CUDA memory. Thanks for any assistance.

MukundVarmaT commented 1 year ago

Hi @YJ-142150, thank you for your interest in our work! The configs on the repo are a bit outdated (apologies for that, I will fix them immediately): N_samples must be 192, and we trained with a total of 512 × 8 rays (N_rand) per iteration. I don't think N_rand matters as much if you train for longer, but N_samples does!
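For reference, the two values mentioned above would look like this in a NeRF-style text config. This is only a sketch of the relevant lines, assuming the repo uses the common key = value config format; check the actual config files in the repo for the exact keys and remaining options.

```
# Hypothetical excerpt of a training config (key names assumed, not verified against the repo)
N_samples = 192     # samples per ray, per the maintainer's recommendation
N_rand = 4096       # rays per iteration: 512 x 8
```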

I hope this helps! Do let us know if you have any other issues.

YJ-142150 commented 1 year ago

Thanks a lot for your help!