enlyth / sd-webui-riffusion

Riffusion extension for AUTOMATIC1111's SD Web UI
MIT License
195 stars 23 forks

nvrtc: error: invalid value for --gpu-architecture (-arch) #6

Open JousterL opened 1 year ago

JousterL commented 1 year ago

I've been trying to use the plugin and keep running into a failure when converting images to audio. Traceback is below:

I'm using a 4090, so I thought maybe the torch CUDA version was too low (other extensions use cu116 instead of cu113). I tried amending the install statement, but still no luck. Hoping it's something simple I'm overlooking.
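For context on what "amending the statement" usually involves: a hypothetical reinstall of the cu116 wheels inside the webui venv might look like the following (the version pins are illustrative and not taken from this thread; check the PyTorch wheel index for versions matching your setup):

```shell
# Illustrative only: reinstall torch/torchaudio built against CUDA 11.6
# inside the webui venv. Activate the venv first (Windows shown):
#   venv\Scripts\activate
pip install --force-reinstall torch==1.12.1+cu116 torchaudio==0.12.1+cu116 \
    --extra-index-url https://download.pytorch.org/whl/cu116
```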

Traceback (most recent call last):
  File "D:\AI\stable-diffusion-webui-master.bak\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 284, in run_predict
    output = await app.blocks.process_api(
  File "D:\AI\stable-diffusion-webui-master.bak\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 982, in process_api
    result = await self.call_function(fn_index, inputs, iterator)
  File "D:\AI\stable-diffusion-webui-master.bak\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 824, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\AI\stable-diffusion-webui-master.bak\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\AI\stable-diffusion-webui-master.bak\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "D:\AI\stable-diffusion-webui-master.bak\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "D:\AI\stable-diffusion-webui-master.bak\stable-diffusion-webui\extensions\sd-webui-riffusion\scripts\riffusion.py", line 344, in convert_audio
    output_files.append(convert_audio_file(image, image_dir))
  File "D:\AI\stable-diffusion-webui-master.bak\stable-diffusion-webui\extensions\sd-webui-riffusion\scripts\riffusion.py", line 325, in convert_audio_file
    wav_bytes, duration_s = riffusion.wav_bytes_from_spectrogram_image(image_file)
  File "D:\AI\stable-diffusion-webui-master.bak\stable-diffusion-webui\extensions\sd-webui-riffusion\scripts\riffusion.py", line 184, in wav_bytes_from_spectrogram_image
    samples = self.waveform_from_spectrogram(
  File "D:\AI\stable-diffusion-webui-master.bak\stable-diffusion-webui\extensions\sd-webui-riffusion\scripts\riffusion.py", line 315, in waveform_from_spectrogram
    waveform = griffin_lim(Sxx_torch).cpu().numpy()
  File "D:\AI\stable-diffusion-webui-master.bak\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\AI\stable-diffusion-webui-master.bak\stable-diffusion-webui\venv\lib\site-packages\torchaudio\transforms\_transforms.py", line 280, in forward
    return F.griffinlim(
  File "D:\AI\stable-diffusion-webui-master.bak\stable-diffusion-webui\venv\lib\site-packages\torchaudio\functional\functional.py", line 306, in griffinlim
    angles = angles.div(angles.abs().add(1e-16))
enlyth commented 1 year ago

I'm not sure what's causing this, to be honest; personally I'm on a 3090 with cu113.

FWIW, I think Griffin-Lim is fast enough to run on the CPU. Try modifying the code and passing device="cpu" to waveform_from_spectrogram to see if it works and how long it takes.

JousterL commented 1 year ago

That did allow me to proceed. It took about 45 seconds to convert 8 files on my 7950X.