Hi, it is a good question. You are right: basically, one epoch means running through the training dataset once. However, in some cases, such as when you sample small patches from images (for tasks like inpainting, super-resolution, etc.), there is no natural way to measure one epoch. Also, sometimes you just want to save the model frequently (for example, every 0.5 epoch), so it is more convenient to define an epoch as a fixed number of steps treated as one unit.
Thus I use SPE (steps per epoch) so that you can define how many iterations you want as one running unit. It is just a personal preference in development.
Again, you are right. An epoch means running through the training dataset once.
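To make this concrete, here is a minimal sketch (not the actual code of this toolkit; `TRAIN_SPE`, `sample_random_patch`, and `train_step` are illustrative names) of a training loop where one "epoch" is simply a fixed number of steps, independent of the dataset size:

```python
# Minimal sketch: "epoch" defined as TRAIN_SPE steps instead of one dataset pass.
import random

TRAIN_SPE = 1000               # steps per "epoch", chosen freely by the user
MAX_EPOCHS = 10
SAVE_EVERY = TRAIN_SPE // 2    # e.g. save a checkpoint every 0.5 epoch

dataset = list(range(250))     # toy "dataset"; its size is unrelated to TRAIN_SPE

def sample_random_patch(data):
    """Random sampling, as in inpainting / super-resolution training."""
    return random.choice(data)

def train_step(batch):
    pass                       # placeholder for one optimization step

step = 0
for epoch in range(MAX_EPOCHS):
    for _ in range(TRAIN_SPE):             # one "epoch" == TRAIN_SPE steps
        batch = sample_random_patch(dataset)
        train_step(batch)
        step += 1
        if step % SAVE_EVERY == 0:
            print(f"step {step}: save checkpoint (epoch {step / TRAIN_SPE:.1f})")
```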
@JiahuiYu Thank you very much for your quick reply!
Does this mean that training always covers the whole training dataset? Like this:
Yes. It is also possible for one SPE to be larger than the length of the training dataset.
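For intuition, here is a hypothetical sketch (not this toolkit's actual data feeder; `endless_batches` is an assumed name) of why SPE can exceed the dataset size: the feeder simply reshuffles and wraps around when it runs out of samples.

```python
# Hypothetical feeder that cycles through the dataset forever.
import random

def endless_batches(dataset):
    """Yield samples indefinitely, reshuffling after each full pass."""
    while True:
        random.shuffle(dataset)
        for sample in dataset:
            yield sample

data = list(range(100))        # toy dataset of 100 samples
feeder = endless_batches(data)

TRAIN_SPE = 300                # larger than len(data); the feeder just wraps around
for step in range(TRAIN_SPE):
    batch = next(feeder)
    # ... run one training step on `batch` ...
```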
Thank you very much!
Hi Jiahui,
Thanks for your toolkit, it's really helpful!
In my understanding, an epoch means running through the training dataset once, so I'm not sure why you define an epoch as TRAIN_SPE steps regardless of the size of the training dataset.
See progress_logger.
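In case it helps, here is a rough illustration of what a steps-based progress logger can look like. This is only a sketch with assumed names (`log_progress`, `TRAIN_SPE`), not the actual progress_logger implementation:

```python
# Rough sketch (not the real progress_logger): report progress in steps and
# in fractional "epochs" derived from TRAIN_SPE.
import time

def log_progress(step, train_spe, log_every=100):
    """Print the current step and the fractional epoch based on TRAIN_SPE."""
    if step % log_every == 0:
        epoch = step / train_spe
        print(f"[{time.strftime('%H:%M:%S')}] step {step}, epoch {epoch:.2f}")

TRAIN_SPE = 1000
for step in range(1, 3001):
    # ... one training step would run here ...
    log_progress(step, TRAIN_SPE)
```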