chrisdonahue / wavegan

WaveGAN: Learn to synthesize raw audio with generative adversarial networks
MIT License

Epoch size #92

Open halameri opened 4 years ago

halameri commented 4 years ago

How can I modify the number of iterations or epochs to reduce the training time?

spagliarini commented 4 years ago

Hi! Do you mean how to save more frequently?

If so, you can change the parameter train_save_secs

If you change this parameter and you still want your loss to be saved and plotted at the same time (when you visualize it on TensorBoard), you also need to change the parameter train_summary_secs so that train_save_secs=train_summary_secs.
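For reference, both values are exposed as command-line flags of the training script, so no code change is needed. A sketch of the invocation (the positional arguments and exact flag names are assumptions based on the repo's train_wavegan.py; check `python train_wavegan.py --help` for your checkout):

```shell
# Save a checkpoint and write summaries every 300 seconds,
# keeping the two intervals equal as suggested above.
python train_wavegan.py train ./train \
  --data_dir ./data \
  --train_save_secs 300 \
  --train_summary_secs 300
```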

halameri commented 4 years ago

Thank you for your response @spagliarini. I don't want to save more frequently; I want to reduce the number of epochs or iterations, because the training process takes a long time (200k iterations).

spagliarini commented 4 years ago

I see. So far, the only way I have found to stop the training is manual: just stop it before it reaches 200k iterations. Make sure the generator is performing well enough by checking the preview. In #63 it was mentioned that good results were already obtained after 100k iterations, or earlier.

Actually, this is the first time I have dealt with TensorFlow, and from what I found in the TensorFlow documentation it is possible to automatically stop a training session by fixing a threshold on the loss, but I couldn't find a good one. Are you more familiar with TensorFlow? Is there a way to stop the training based on the number of iterations?
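On the iteration-count question: in TensorFlow 1.x, an open-ended training loop run under `tf.train.MonitoredTrainingSession` can be stopped after a fixed global step by passing `tf.train.StopAtStepHook(last_step=N)` in the session's hooks (applying that here would mean editing train_wavegan.py, which I haven't tried). The underlying pattern is just an iteration-based stopping condition, which a minimal framework-free sketch can show; `train_step` below is a hypothetical stand-in for one real discriminator/generator update:

```python
def train_step(state):
    """Placeholder for one optimizer update; pretend the loss decays."""
    state["loss"] *= 0.99


def train(max_iters):
    """Run the open-ended loop, but break once max_iters steps are done."""
    state = {"loss": 1.0}
    step = 0
    while True:                # WaveGAN-style loop with no built-in end
        train_step(state)      # one "training iteration"
        step += 1
        if step >= max_iters:  # iteration-based stopping condition
            break
    return step, state["loss"]
```

With `max_iters=100_000` this would reproduce the 100k-iteration budget mentioned above instead of running to 200k.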