chrisdonahue / wavegan

WaveGAN: Learn to synthesize raw audio with generative adversarial networks
MIT License
1.33k stars 280 forks

setup on colab #73

Open MAnal0025 opened 4 years ago

MAnal0025 commented 4 years ago

This is my first time running this project. After reading all the requirements, I decided to run your project on Google Colab; I chose Colab because it gives me a free GPU option. Is it OK to run on Colab, or could you guide me on setting up this project? THANK YOU!

jvel07 commented 4 years ago

Hi, I am also running it in Colab. It's pretty easy to get it running there.

Tylersuard commented 4 years ago

Answered my question, thank you :)

moih commented 4 years ago

Have you had success training on Google Colab?

When I try to run the code, it throws an error because of the TensorFlow version... would anyone here mind sharing their Colab notebook?

Thanks!

Tylersuard commented 4 years ago

I'm getting the tensorflow version error too.

Tylersuard commented 4 years ago

If I use a version of TensorFlow 1, I get the error `tensorflow.data has no attribute: experimental`, and if I use TensorFlow 2 I get `tensorflow has no attribute: placeholder`.

jvel07 commented 4 years ago

Hi, WaveGAN was designed for TF 1.12.0 (as per the documentation). However, I managed to run it using TF 1.15. Before running your experiments on Colab, try `%tensorflow_version 1.x`, then `import tensorflow` and check the version with `print(tensorflow.__version__)`. If the version is still TF 2.0 (Colab's default), reset the runtime.
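For reference, the cell sequence jvel07 describes might look like this. This is a sketch that assumes a Colab runtime; the `%tensorflow_version` magic only exists in Colab and must run before TensorFlow is first imported:

```python
# Colab cell, run before any TensorFlow import (assumption: Colab runtime):
#   %tensorflow_version 1.x
#   import tensorflow
#   print(tensorflow.__version__)   # should report 1.15.x, not 2.x

# Small helper to confirm the runtime really picked up a 1.x release:
def is_tf1(version):
    """True when a version string like '1.15.2' is a TensorFlow 1.x release."""
    return version.split(".")[0] == "1"
```

If `is_tf1(tensorflow.__version__)` comes back False, restart the runtime and rerun the magic before importing anything.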

Tylersuard commented 4 years ago

Thank you! I got the training to work.

moih commented 4 years ago

On my side, the training executes after installing TensorFlow 1.15, but it soon crashes with a couple of "numpy module not found" errors. Could I see your running Colab session?


Tylersuard commented 4 years ago

@moih Absolutely! https://colab.research.google.com/drive/1N0CtpO6VZvcyE72r3eEkBQthQsUlm6Hf

moih commented 4 years ago

@Tylersuard Thanks, it's working for me now! Just curious, how are you managing to download the checkpoints?

Tylersuard commented 4 years ago

I'm downloading the checkpoints manually, which is probably not the best way. There's a command in the docs for how to save them automatically though.

moih commented 4 years ago

Here's a modified version of your notebook that saves checkpoints directly to your Google Drive: https://colab.research.google.com/drive/1oZBt78G2TSnRq4IY194dW3IfuVYNsT-B
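For anyone adapting a notebook like this: persisting checkpoints means mounting Drive first. A minimal sketch, assuming a Colab runtime (the train-directory layout below is just an example, matching the restore path used later in this thread):

```python
# Mount Google Drive in Colab so checkpoints survive session resets.
# (Colab-only API, so it is commented out here:)
#   from google.colab import drive
#   drive.mount('/content/drive')

import os

def checkpoint_path(step, base="drive/My Drive/colab/wavegan/train"):
    # Build the path handed to saver.restore(); `base` is a hypothetical layout.
    return os.path.join(base, "model.ckpt-%d" % step)
```

With Drive mounted, pointing the training script's output directory at a path under `drive/My Drive/` is what keeps the checkpoint files after a timeout.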

Tylersuard commented 4 years ago

Thank you sir!

Tylersuard commented 4 years ago

@moih were you able to get generation to work after training the model?

moih commented 4 years ago

Hi, only when I download the whole results folder and run generation directly on my computer, not with the generator code provided as an example... let me know if you manage to do it yourself.


ben-hawks commented 4 years ago

Hey! Figured I'd throw my 2 cents in; I've managed to successfully train and generate samples using Google Colab. A couple of things I found out along the way:

* Set up everything in Google Drive; that way, if the Colab session times out for one of many reasons, your checkpoint files are still saved.

* Depending on how much storage you have in your Drive account (if you only have the default free 15 GB, for example), the "deletion" of checkpoint files as training progresses **does not** actually delete the files from your Drive account. It puts them into your "Trash" folder, which **still counts against your Drive storage limit**. As far as I could find, there's no way to change or disable this behavior, so my solution (as hacky as it might be) was to have a second Colab session regularly deleting files from the trash using [PyDrive](https://pythonhosted.org/PyDrive/).

* On the same note, if you do run out of Google Drive space, things in my experience fail silently, with the checkpoint files not being saved to your Drive until you clear the trash. I've lost checkpoints to this, so be mindful of it.

* Generation worked more or less with the exact example generation code in the README (making sure to use `%tensorflow_version 1.x`); the only modification needed is changing the name of the checkpoint you're loading, for example:
  `saver.restore(sess, 'drive/My Drive/colab/wavegan/train/model.ckpt-XXXX')`
  where XXXX is the actual number of the checkpoint you're attempting to load.

* You can also run the provided backup script in another Colab instance while connected to your Drive.

Hope this helps!
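A second session that regularly empties the Drive trash via PyDrive, as suggested in this thread, might look roughly like the sketch below. The live calls are commented out because they need OAuth in a Colab session, and the metadata shape follows Drive API v2 as surfaced by PyDrive; treat both as assumptions to verify:

```python
# Hypothetical cleanup loop; real calls commented out (need OAuth in Colab):
#   from pydrive.auth import GoogleAuth
#   from pydrive.drive import GoogleDrive
#   drive = GoogleDrive(GoogleAuth())
#   for f in drive.ListFile({'q': 'trashed=true'}).GetList():
#       f.Delete()   # permanently deletes, freeing Drive quota

def trashed_only(file_list):
    """Filter Drive v2 file-metadata dicts down to the trashed entries."""
    return [f for f in file_list if f.get('labels', {}).get('trashed')]
```

Running that loop on a timer in a second notebook is what keeps the "trash still counts against quota" problem from silently stalling checkpoint saves.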

mikemech83 commented 3 years ago

> Hey! Figured I'd throw my 2 cents in; I've managed to successfully train and generate samples using Google Colab. A couple of things I found out along the way:
>
> * Set up everything in Google Drive; that way, if the Colab session times out for one of many reasons, your checkpoint files are still saved.
>
> * Depending on how much storage you have in your Drive account (if you only have the default free 15 GB, for example), the "deletion" of checkpoint files as training progresses **does not** actually delete the files from your Drive account. It puts them into your "Trash" folder, which **still counts against your Drive storage limit**. As far as I could find, there's no way to change or disable this behavior, so my solution (as hacky as it might be) was to have a second Colab session regularly deleting files from the trash using [PyDrive](https://pythonhosted.org/PyDrive/).
>
> * On the same note, if you do run out of Google Drive space, things in my experience fail silently, with the checkpoint files not being saved to your Drive until you clear the trash. I've lost checkpoints to this, so be mindful of it.
>
> * Generation worked more or less with the exact example generation code in the README (making sure to use `%tensorflow_version 1.x`); the only modification needed is changing the name of the checkpoint you're loading, for example:
>   `saver.restore(sess, 'drive/My Drive/colab/wavegan/train/model.ckpt-XXXX')`
>   where XXXX is the actual number of the checkpoint you're attempting to load.
>
> * You can also run the provided backup script in another Colab instance while connected to your Drive.
>
> Hope this helps!

This was helpful for sure! For anyone else who runs into this: if you use the Colab notebook from the README, as opposed to the snippet provided to produce a single clip, one thing I ran into was there being no tensor called `'G_z_spec:0'`. However, if you compare with the logic from the README, you can see that the `G_z_spec` part is not needed at all; I'm guessing `G_z_spec` might be for SpecGAN? Anyway, simply comment out the `G_z_spec` line and change `_G_z, _G_z_spec = sess.run([G_z, G_z_spec], {z: _z})` to `_G_z = sess.run(G_z, {z: _z})`.

Also comment out `display(PIL.Image.fromarray(_G_z_spec[i]))` and it works perfectly :)

pryda-snare commented 3 years ago


Still can't get the generation running. Would you mind sharing a notebook? :)