Closed nivha closed 3 years ago
@nivha Did it take only 25 hours to train the model for 1000 epochs? May I ask which GPU you used? By the way, according to the paper, it seems they used 1000 epochs for the DIV2K dataset and 200 epochs for the CelebA dataset.
Hi @DongHwanJang, it actually took about 31 hours to train the model for 1000 epochs. When I wrote the question it had only just started training, and since it was not the point of my question I mistakenly estimated 1.5 min per epoch, when it's closer to 2 min per epoch. I used an RTX8000. You're right about the 1000 epochs! Thanks.
@nivha Hello, may I ask which .yaml you used to train? One epoch takes about 7 min on an RTX3090. Thanks!
@xiximelon Hi, it was two years ago.. I think it was on DIV2K on an RTX8000, as I wrote. But unfortunately I don't remember more than that.
Thanks for the reply!
Hi, very cool work! In your paper you write that you train for 200 epochs, but in the config files (included in this repo) you have 1000 epochs. Should there be a big difference between the two options? In terms of runtime it matters a lot: 25 vs. 5 hours of training time. I also wonder whether the final quality changes. Thanks!
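For anyone wanting to compare the two settings, the epoch count is usually a single key in the training .yaml. A hypothetical sketch (the key name and layout are assumptions for illustration, not copied from this repo's actual config files):

```yaml
# Hypothetical training-config fragment: shortening the run for a quick comparison.
# The key name (epoch_max) is an assumption; check the repo's own .yaml files.
epoch_max: 200   # repo configs reportedly use 1000; the paper reports 200 for CelebA
```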