Tears5Fears opened this issue 7 years ago
My other hypothesis is that the provided model is large enough that GPU memory becomes a bottleneck. But since you mention you've been able to fit the model on a GTX 1060, I'm unsure whether that's the issue. Perhaps a simpler convolutional model would work better on my setup.
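If memory really is the bottleneck, one workaround (a sketch, not something the project provides) would be to run inference on slices of the spectrogram along the time axis and stitch the outputs back together, so peak memory is bounded by the chunk size rather than the full track length. `predict_in_chunks` and `chunk_size` below are hypothetical names, and the lambda stands in for `model.predict`:

```python
import numpy as np

def predict_in_chunks(predict_fn, spectrogram, chunk_size=128):
    """Run predict_fn on time-axis slices of the spectrogram and
    concatenate the results, to keep peak memory usage bounded."""
    outputs = []
    for start in range(0, spectrogram.shape[0], chunk_size):
        chunk = spectrogram[start:start + chunk_size]
        outputs.append(predict_fn(chunk))
    return np.concatenate(outputs, axis=0)

# Toy stand-in for model.predict: identity on a fake (time, freq) spectrogram.
spec = np.random.rand(1000, 513).astype(np.float32)
result = predict_in_chunks(lambda x: x, spec, chunk_size=128)
assert result.shape == spec.shape
```

Note that slicing the input of a convolutional model introduces boundary effects at chunk edges; overlapping the chunks slightly and discarding the borders would mitigate that, at the cost of some redundant computation.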
Running
python acapellabot.py sample.mp3 --weights weights.h5
on the pretrained model works on CPUs, but crashes on GPUs with a memory overflow. I suspect it has something to do with the concatenation steps in the Keras model.

Tested on a Tesla K80:
Tested on a GTX 1070:
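Since the run reportedly succeeds on CPU, a possible stopgap (an assumption on my part, not a project-supported flag) is to hide the GPU from TensorFlow/Keras before it is imported, so inference falls back to the CPU:

```python
import os

# Hypothetical workaround: setting CUDA_VISIBLE_DEVICES to -1 *before*
# TensorFlow/Keras is imported hides all GPUs, so subsequent calls like
# `python acapellabot.py sample.mp3 --weights weights.h5` run on CPU only.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# Any later `import keras` in this process will now see no GPU devices.
assert os.environ["CUDA_VISIBLE_DEVICES"] == "-1"
```

The same effect can be had from the shell by prefixing the command with `CUDA_VISIBLE_DEVICES=-1`; it has to happen before TensorFlow initializes CUDA, or it is ignored.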