chrisdonahue / wavegan

WaveGAN: Learn to synthesize raw audio with generative adversarial networks
MIT License

Checkpoints and global step #20

Closed by spagliarini 5 years ago

spagliarini commented 5 years ago

Hi! Interesting work!

I'm trying to train the WaveGAN using the speech dataset (sc09) you used here. I have a (possibly very naive) question.

I run

python train_wavegan.py train ./train --data_dir data

When I look at the outputs, I see that the checkpoints are enumerated by the global step (right?). Does the global step then correspond to the number of epochs the generator has been trained for?

As an example: model.ckpt-497

Thank you!

chrisdonahue commented 5 years ago

The global step corresponds to the number of steps the generator has been trained for, not the number of epochs. The discriminator is trained 5x per generator update. Each batch has 64 examples and there are ~18.6k training examples. Hence, step 497 corresponds to epoch 497 * 5 * 64 / 18600 ≈ 8.5. Now that I'm looking at the paper, I think I calculated the epoch amounts incorrectly (will fix in future versions). The iteration counts are accurate, though (e.g., our sc09 networks were trained for 200k iterations).
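The arithmetic above can be sketched as a small helper. The defaults (batch size 64, 5 discriminator updates per generator step, ~18.6k training examples) are the values quoted in this thread, not read from the repository's code, so adjust them for your own run:

```python
def epochs_for_step(global_step, batch_size=64, d_updates_per_g=5, n_examples=18600):
    """Estimate the number of epochs of data consumed by checkpoint `global_step`.

    This counts examples seen across all discriminator updates (5 per
    generator step), matching the calculation in the comment above.
    """
    examples_seen = global_step * d_updates_per_g * batch_size
    return examples_seen / n_examples


# Checkpoint model.ckpt-497 on the sc09 dataset:
print(epochs_for_step(497))  # ~8.5 epochs
```

Note that if you only want to count data seen by the generator, drop the `d_updates_per_g` factor (497 * 64 / 18600 ≈ 1.7 epochs instead).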