Thanks for a fantastic project!
I tried it right away, but my GPU has only 6 GB of VRAM, so audio generation failed with a CUDA out-of-memory error saying it was almost 3 GB short.
So in the import cell of riffusion.ipynb I changed the line
pipe = DiffusionPipeline.from_pretrained("riffusion/riffusion-model-v1")
to
pipe = DiffusionPipeline.from_pretrained("riffusion/riffusion-model-v1", torch_dtype=torch.float16)
and in the "#@title Define a predict function" cell changed
width=768,
to
width=656,
With that, I was able to generate a 6 s 550 ms music clip on my GTX 1060 with 6 GB of VRAM.
The sound quality of the resulting file sounds the same as the sample on the official site.
Incidentally, no matter how small I made width=, VRAM was never enough on its own, so lowering the floating-point precision was clearly what made the difference.
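As a side note, the clip length seems to scale linearly with width. A small sanity check, assuming (my inference from the result above, not something stated in the docs) that roughly 100 spectrogram pixel columns correspond to one second of audio:

```python
# Assumption (inferred, not from the Riffusion docs): roughly 100 spectrogram
# pixel columns map to one second of generated audio.
def clip_seconds(width_px: int, px_per_second: float = 100.0) -> float:
    """Estimate clip length in seconds for a given spectrogram width."""
    return width_px / px_per_second

print(clip_seconds(656))  # 6.56, close to the observed 6 s 550 ms clip
print(clip_seconds(768))  # 7.68, the notebook's default width
```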
Is there any downside to lowering the floating-point precision?
If not, I'd suggest defaulting to float16 to save VRAM and let more people experience this wonderful project.
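For intuition on why the dtype change helped where shrinking width did not: halving the element size halves the memory of every weight and activation tensor, whereas reducing width only shrinks some activations. A minimal sketch in NumPy (the shape below is purely illustrative, not the real model's):

```python
import numpy as np

def tensor_bytes(shape, dtype):
    """Memory footprint in bytes of an array with the given shape and dtype."""
    return int(np.prod(shape)) * np.dtype(dtype).itemsize

# Hypothetical latent shape for illustration only; the actual model differs.
shape = (1, 4, 64, 96)
fp32 = tensor_bytes(shape, np.float32)  # 4 bytes per element
fp16 = tensor_bytes(shape, np.float16)  # 2 bytes per element
assert fp16 * 2 == fp32  # float16 uses exactly half the memory of float32
```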