zejacky opened this issue 1 year ago
Did you download the ckpt for Riffusion?
Thank you very much for my missing puzzle piece :). I downloaded it from https://huggingface.co/riffusion/riffusion-model-v1/blob/main/riffusion-model-v1.ckpt and now it works!
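For reference, a minimal sketch of fetching that checkpoint programmatically, assuming the huggingface_hub package is installed (the WebUI itself just needs the .ckpt copied into its models/Stable-diffusion folder):

```python
# Sketch: fetch riffusion-model-v1.ckpt via huggingface_hub (assumed installed).
# The returned path points into the local Hugging Face cache; copy the file into
# the WebUI's models/Stable-diffusion folder afterwards.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="riffusion/riffusion-model-v1",
    filename="riffusion-model-v1.ckpt",
)
print("Downloaded to:", ckpt_path)
```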
np! glad i could help
Same issue, but I do have the model, and the command line output ends with 'Torch not compiled with CUDA enabled'. Everything else works and the image comes out fine, but when I run it through an online converter I get these weird sounds. Logs here:
Found 1 images in C:\Users\ryano\Music\stable-diffusion-webui-directml\outputs\txt2img-images\2023-08-09, pattern .jpg, .png
Traceback (most recent call last):
  File "C:\Users\ryano\Music\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\routes.py", line 422, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\ryano\Music\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1323, in process_api
    result = await self.call_function(
  File "C:\Users\ryano\Music\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\blocks.py", line 1051, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\ryano\Music\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\ryano\Music\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\Users\ryano\Music\stable-diffusion-webui-directml\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\Users\ryano\Music\stable-diffusion-webui-directml\extensions\sd-webui-riffusion\scripts\riffusion.py", line 364, in convert_audio
    output_files.append(convert_audio_image(image, image_file, image_dir, width))
  File "C:\Users\ryano\Music\stable-diffusion-webui-directml\extensions\sd-webui-riffusion\scripts\riffusion.py", line 333, in convert_audio_image
    wav_bytes, duration_s = riffusion.wav_bytes_from_spectrogram_image(image_file)
  File "C:\Users\ryano\Music\stable-diffusion-webui-directml\extensions\sd-webui-riffusion\scripts\riffusion.py", line 186, in wav_bytes_from_spectrogram_image
    samples = self.waveform_from_spectrogram(
  File "C:\Users\ryano\Music\stable-diffusion-webui-directml\extensions\sd-webui-riffusion\scripts\riffusion.py", line 291, in waveform_from_spectrogram
    Sxx_torch = torch.from_numpy(Sxx).to(device)
  File "C:\Users\ryano\Music\stable-diffusion-webui-directml\venv\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
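The traceback shows the spectrogram tensor being moved to a CUDA device even though the DirectML build of the WebUI ships a CPU-only torch, which is what trips the "Torch not compiled with CUDA enabled" assertion. A minimal sketch of the usual guard, with names mirroring the traceback purely for illustration (this is not the extension's actual code):

```python
# Minimal sketch of the usual guard: only request CUDA if this torch build has it.
# Sxx stands in for the spectrogram array from the traceback; names are illustrative.
import numpy as np
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

Sxx = np.zeros((512, 512), dtype=np.float32)   # placeholder spectrogram
Sxx_torch = torch.from_numpy(Sxx).to(device)   # no longer trips the CUDA assertion
print("Tensor device:", Sxx_torch.device)
```

On a CPU-only torch the conversion should still run, just more slowly.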
Thank you for your work! And happy new year.
I have a "problem" with automatic1111 and the extension "sd-webui-riffusion". I installed it like described. Also added the ENV Variable to boot System and User PATH. (D:\ffmpeg\bin) The Extension appears booth in the webui Tab and under Scripts.
If I convert a picture to audio from the Tab or via script, there is no error in the CMD. However, if I listen to the audio file, it sounds "corrupted" and like some strange noise (robotic), not like music. (See attachment)
I think Im doing something wrong, or I don't understand how it works. Maybe you have already an idea. Sound Device: Realtek HD audio / Nvidia HD audio Windows 11 x64
Thank you very much for helping me out.
https://user-images.githubusercontent.com/119687458/210222287-11a688c1-5edb-46eb-9c89-6711222b1cbe.mp4
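For context on why this degrades into robotic noise instead of failing outright: the conversion step simply inverts whatever image it is given, so an image produced without the Riffusion checkpoint (the fix in this thread), or decoded with mismatched spectrogram parameters, still yields audio, just meaningless audio. A rough sketch of the kind of Griffin-Lim inversion involved, where every parameter is an assumption for illustration rather than the extension's actual values (Riffusion's real pipeline also uses a mel frequency scale, omitted here):

```python
# Rough sketch only: decode a grayscale spectrogram image back to audio with
# Griffin-Lim. Sample rate, hop length, dB range and the brightness-to-power
# mapping below are all assumptions; they must match whatever rendered the
# image, or the result is exactly this kind of robotic noise.
import numpy as np
import torch
import torchaudio
from PIL import Image

sample_rate = 44100   # assumed
max_db = 80.0         # assumed dynamic range encoded in the image

img = Image.open("spectrogram.png").convert("L")     # grayscale spectrogram image
data = np.asarray(img, dtype=np.float32) / 255.0     # brightness in 0..1
power_db = (data - 1.0) * max_db                     # map brightness to dB (assumed)
power = torch.from_numpy(10.0 ** (power_db / 10.0))  # dB -> power spectrogram

# Image rows typically run top-to-bottom, so flip to put low frequencies first.
power = torch.flip(power, dims=[0])

freq_bins = power.shape[0]       # image height = number of frequency bins
n_fft = (freq_bins - 1) * 2      # GriffinLim expects freq_bins == n_fft // 2 + 1
hop_length = n_fft // 4

griffin_lim = torchaudio.transforms.GriffinLim(
    n_fft=n_fft, hop_length=hop_length, power=2.0, n_iter=32
)
waveform = griffin_lim(power)                        # shape: (num_samples,)
torchaudio.save("decoded.wav", waveform.unsqueeze(0), sample_rate)
```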