Open abujr101 opened 12 hours ago
Try downgrading to torch 2.2.1. The RX 580 is quite old and does not support FP16.
env\python -m pip install torch==2.2.1 torchvision==0.17.1 torchaudio==2.2.1 --index-url https://download.pytorch.org/whl/cu118
Patch ZLUDA after that.
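Before relying on the downgrade, it can help to confirm whether the card can actually run half-precision ops at all. A minimal sketch (the `fp16_supported` helper is hypothetical, not part of Applio): it attempts a tiny FP16 matmul on the GPU and falls back to FP32 if the op raises, which is what happens on GFX803 under ZLUDA.

```python
import torch

def fp16_supported(device: str = "cuda") -> bool:
    """Probe the device with a tiny half-precision matmul; False if unsupported."""
    if not torch.cuda.is_available():
        return False
    try:
        x = torch.ones(8, 8, dtype=torch.float16, device=device)
        _ = x @ x  # raises RuntimeError on GPUs/backends without FP16 support
        return True
    except RuntimeError:
        return False

# Pick a dtype the hardware can actually run; cards like the RX 580 stay on FP32.
dtype = torch.float16 if fp16_supported() else torch.float32
print(f"using dtype: {dtype}")
```

If this prints `torch.float32`, any code path that unconditionally calls `.half()` will fail on this GPU regardless of the torch version.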
Project Version
3.2.6
Platform and OS Version
Windows 11
Affected Devices
AMD RX 580
Existing Issues
No response
What happened?
I am on an AMD GPU; my GPU's shader ISA is GFX803. I followed all the instructions for AMD GPUs, but I get this error when inferencing:
```
To create a public link, set share=True in launch().
Traceback (most recent call last):
  File "D:\Applio-3.2.6\env\lib\site-packages\gradio\queueing.py", line 536, in process_events
    response = await route_utils.call_process_api(
  File "D:\Applio-3.2.6\env\lib\site-packages\gradio\route_utils.py", line 321, in call_process_api
    output = await app.get_blocks().process_api(
  File "D:\Applio-3.2.6\env\lib\site-packages\gradio\blocks.py", line 1935, in process_api
    result = await self.call_function(
  File "D:\Applio-3.2.6\env\lib\site-packages\gradio\blocks.py", line 1520, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
  File "D:\Applio-3.2.6\env\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "D:\Applio-3.2.6\env\lib\site-packages\anyio\_backends\_asyncio.py", line 2405, in run_sync_in_worker_thread
    return await future
  File "D:\Applio-3.2.6\env\lib\site-packages\anyio\_backends\_asyncio.py", line 914, in run
    result = context.run(func, *args)
  File "D:\Applio-3.2.6\env\lib\site-packages\gradio\utils.py", line 826, in wrapper
    response = f(*args, **kwargs)
  File "D:\Applio-3.2.6\core.py", line 192, in run_infer_script
    infer_pipeline.convert_audio(
  File "D:\Applio-3.2.6\rvc\infer\infer.py", line 254, in convert_audio
    self.get_vc(model_path, sid)
  File "D:\Applio-3.2.6\rvc\infer\infer.py", line 435, in get_vc
    self.setup_network()
  File "D:\Applio-3.2.6\rvc\infer\infer.py", line 487, in setup_network
    self.net_g.half() if self.config.is_half else self.net_g.float()
  File "D:\Applio-3.2.6\env\lib\site-packages\torch\nn\modules\module.py", line 1011, in half
    return self._apply(lambda t: t.half() if t.is_floating_point() else t)
  File "D:\Applio-3.2.6\env\lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
    module._apply(fn)
  File "D:\Applio-3.2.6\env\lib\site-packages\torch\nn\modules\module.py", line 779, in _apply
    module._apply(fn)
  File "D:\Applio-3.2.6\env\lib\site-packages\torch\nn\modules\module.py", line 804, in _apply
    param_applied = fn(param)
  File "D:\Applio-3.2.6\env\lib\site-packages\torch\nn\modules\module.py", line 1011, in half
    return self._apply(lambda t: t.half() if t.is_floating_point() else t)
RuntimeError: CUDA error: operation not supported
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
```
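The failing frame is `infer.py:487`, where the model is cast with `net_g.half()` whenever `config.is_half` is true; on GFX803 the half cast itself raises the CUDA error. A hedged sketch of the same dispatch with the flag forced off (`Config`/`is_half` here is a stand-in mirroring Applio's own flag, and `nn.Linear` stands in for the real generator network):

```python
import torch
import torch.nn as nn

class Config:
    # Assumption: stand-in for Applio's config flag; set False for GPUs
    # without FP16 support (e.g. RX 580 / GFX803 under ZLUDA).
    is_half = False

config = Config()
net_g = nn.Linear(4, 4)  # stand-in for the generator network

# Same dispatch as rvc/infer/infer.py:487 -- with is_half=False the .half()
# cast (and the "operation not supported" error it raises) is never reached.
net_g.half() if config.is_half else net_g.float()
print(next(net_g.parameters()).dtype)  # torch.float32
```

So if the torch downgrade alone does not help, running with the half-precision flag disabled keeps the whole pipeline in FP32 and avoids this code path entirely.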