lshqqytiger / stable-diffusion-webui-amdgpu

Stable Diffusion web UI

[Bug]: CUDA: out of memory before even loading any models #524

Open XD1674 opened 1 month ago

XD1674 commented 1 month ago

Checklist

What happened?

I wanted to enable ZLUDA, but it reports "CUDA: out of memory" before any model is even loaded, even though I launched with --lowvram and my card has 8 GB of VRAM. I searched everywhere online and found nothing. Maybe it's because I installed ZLUDA with ROCm 5.7.1 on an old AMD RX 570, but I'm not sure.

Steps to reproduce the problem

No idea; it apparently works for everyone else.

What should have happened?

It should work normally and not produce this error. Also, 24 GB of system RAM should be plenty (I use Opera GX).

What browsers do you use to access the UI?

Other

Sysinfo

sysinfo-2024-08-14-20-07.json

Console logs

venv "D:\stable diffusion\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
ROCm Toolkit 5.7 was found.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1-amd-2-g395ce8dc
Commit hash: 395ce8dc2cb01282d48074a89a5e6cb3da4b59ab
Using ZLUDA in D:\stable diffusion\stable-diffusion-webui-directml\.zluda
WARNING:xformers:A matching Triton is not available, some optimizations will not be enabled
Traceback (most recent call last):
  File "D:\stable diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\xformers\__init__.py", line 57, in _is_triton_available
    import triton  # noqa
ModuleNotFoundError: No module named 'triton'
D:\stable diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --no-download-sd-model --lowvram --opt-sdp-attention --opt-sub-quad-attention --precision full --no-half
ONNX failed to initialize: Failed to import diffusers.pipelines.auto_pipeline because of the following error (look up to see its traceback):
Failed to import diffusers.pipelines.aura_flow.pipeline_aura_flow because of the following error (look up to see its traceback):
cannot import name 'UMT5EncoderModel' from 'transformers' (D:\stable diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\__init__.py)
ZLUDA device failed to pass basic operation test: index=None, device_name=Radeon RX 570 Series [ZLUDA]
CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
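
For context, the "basic operation test" the log refers to is, roughly, a trivial tensor operation on the ZLUDA device (which torch sees as a CUDA device). Below is a minimal sketch of that kind of check, assuming PyTorch is installed; the exact test the webui runs may differ:

```python
# Hedged sketch of a "basic operation test" on the ZLUDA device.
# The webui's actual check may differ; this just illustrates the failure mode.
import torch

def basic_op_test(device: str = "cuda") -> bool:
    try:
        t = torch.ones(2, device=device)  # trivial allocation on the device
        # Trivial arithmetic: t + t should equal a tensor of 2.0s.
        return bool(torch.allclose(t + t, torch.full_like(t, 2.0)))
    except RuntimeError as e:
        # On pre-Navi cards under the HIP SDK this can raise a spurious
        # "CUDA error: out of memory" (see the maintainer's reply below).
        print(f"basic operation test failed: {e}")
        return False

print("passed" if basic_op_test() else "failed")
```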

Additional information

I have messed around with ROCm a bit, but nothing else. I have done two clean reinstalls and still get this error.

lshqqytiger commented 1 month ago

The HIP SDK has a bug with RX 500-series (pre-Navi) cards: it throws an out-of-memory error even when the memory is not full.
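
One way to confirm the OOM is spurious rather than a real allocation failure is to query how much VRAM torch actually sees. A minimal sketch, assuming PyTorch is installed and the device initializes at all (on affected pre-Navi cards even this query may fail with the same error):

```python
# Hedged check: report free/total VRAM as torch sees it.
# On a healthy 8 GB RX 570 under ZLUDA, "free" should be far from zero.
import torch

if torch.cuda.is_available():
    free, total = torch.cuda.mem_get_info()
    print(f"free: {free / 2**30:.2f} GiB / total: {total / 2**30:.2f} GiB")
else:
    print("no CUDA/ZLUDA device visible to torch")
```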

CS1o commented 1 month ago

Multiple users fixed this with the following steps:

Go into the stable-diffusion-webui-amdgpu folder and click in the explorer address bar (not the search bar). Type cmd there and hit Enter. Then type and run these three commands one by one:

venv\scripts\activate
pip uninstall torch torchvision torchaudio -y
pip install torch==2.2.1 torchvision==0.17.1 torchaudio==2.2.1 --index-url https://download.pytorch.org/whl/cu118
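
After reinstalling, a quick way to verify the downgrade took effect and the device passes a basic operation again (a minimal sketch, run inside the still-activated venv):

```python
# Hedged post-fix check: torch version and a trivial op on the device.
import torch

print(torch.__version__)          # expect something like 2.2.1+cu118
print(torch.cuda.is_available())  # True once the ZLUDA device initializes
if torch.cuda.is_available():
    t = torch.ones(2, device="cuda")
    print((t + t).cpu())          # tensor([2., 2.]) if the op succeeds
```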