Closed · yzho0907 closed this issue 6 years ago
It's fine — you can use a smaller epoch count in train.py, or just increase batch_size.
Fewer epochs or a larger batch_size will lead to different results, but I can't say whether that's a bad thing,
because there is no way to evaluate the results in the real world.
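To make the trade-off concrete: a larger batch_size means fewer optimizer steps per epoch, which is where most of the wall-clock savings come from (assuming the GPU has memory headroom). A minimal sketch — the function name here is hypothetical, not something from train.py:

```python
import math

def steps_per_epoch(num_samples: int, batch_size: int) -> int:
    """Optimizer steps needed to pass over the whole dataset once."""
    return math.ceil(num_samples / batch_size)

# Doubling batch_size roughly halves the steps per epoch,
# and so roughly halves time per epoch if the GPU stays saturated.
print(steps_per_epoch(100_000, 32))  # 3125
print(steps_per_epoch(100_000, 64))  # 1563
```

Note that changing batch_size can also change convergence behavior (effective learning rate per sample), which is why the results differ, not just the speed.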
Thanks, I just wanted to confirm there was nothing wrong with my settings — we have some GPU clusters that could make the training process much faster.
It was 10 hours per epoch on my single-GPU workstation (GTX 1060). Is that normal, or is something slowing my training down? I am just using the dataset you provided, thanks.