qhduan / just_another_seq2seq

Just another seq2seq repo
329 stars 97 forks source link

how about the training time? #8

Closed yzho0907 closed 6 years ago

yzho0907 commented 6 years ago

It takes about 10 hours per epoch on my single-GPU workstation (GTX 1060). Is that normal, or is something wrong that's slowing my training down? I'm just using the dataset you provided, thx.

qhduan commented 6 years ago

That's normal. You could use a smaller epoch setting in train.py, or just increase batch_size.

Fewer epochs or a larger batch_size will lead to different results, but I don't know whether that's a bad thing or not,

because there is no way to evaluate the result against the real world.
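As a rough back-of-envelope illustration of why a larger batch_size shortens wall-clock time per epoch (the dataset size below is a made-up placeholder, not the repo's actual count): the number of optimizer steps per epoch falls in inverse proportion to batch_size, so as long as the GPU isn't already saturated, each epoch finishes sooner.

```python
# Hypothetical numbers, just to show the arithmetic.
dataset_size = 400_000  # assumed number of training pairs (not from the repo)

for batch_size in (32, 64, 128):
    steps_per_epoch = dataset_size // batch_size
    print(f"batch_size={batch_size}: {steps_per_epoch} steps/epoch")
```

Doubling batch_size halves the steps per epoch; whether the larger-batch model converges to an equally good result is a separate question, as noted above.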

yzho0907 commented 6 years ago

thx, I just wanted to confirm that nothing was wrong with my settings. We have some GPU clusters that can make the training process much faster.