XanaDublaKublaConch opened 3 years ago
Try decreasing the batch size. From the README:
BigGAN generates the images in batches of size [batch_size]. The only reason to reduce batch size from the default of 30 is if you run out of CUDA memory on a GPU. Reducing the batch size will slightly increase overall runtime.
Default: 30
Example:
python visualize.py --song beethoven.mp3 --batch_size 20
In my case, with an RTX 3070 with only 8 GB of VRAM, I needed to decrease the batch size to 10.
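To make the trade-off concrete, here is a minimal sketch of why a smaller batch size lowers peak VRAM at the cost of more forward passes. Note this is an illustration only: `generate_batch` is a hypothetical stand-in for the real BigGAN call, not this project's actual API.

```python
def batch_indices(num_frames, batch_size):
    """Yield (start, end) pairs covering num_frames in chunks of batch_size."""
    for start in range(0, num_frames, batch_size):
        yield start, min(start + batch_size, num_frames)

def render(num_frames, batch_size, generate_batch):
    """Render all frames batch by batch; peak memory scales with batch_size."""
    frames = []
    for start, end in batch_indices(num_frames, batch_size):
        # Only (end - start) frames' worth of activations live in VRAM at once.
        frames.extend(generate_batch(start, end))
    return frames
```

For a ~3-minute song the visualizer produces a few thousand frames, so dropping from `--batch_size 30` to `10` roughly triples the number of forward passes (slower overall) while holding about a third of the activations in GPU memory per pass.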
I followed the README's suggestion and tried this out on the recommended gcloud VM, but I can't generate at the 512 resolution. Is there a workaround (multiple GPUs)? I'm not familiar with PyTorch, so I don't know whether this is a VM sizing issue or a problem with how this project batches data into CUDA memory. My audio is a little over 3 minutes.