Closed Csavoldi closed 5 days ago
Pick one: --use-zluda or --use-directml, not both. I recommend ZLUDA; it's quite a bit faster.
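If you go with ZLUDA, a minimal webui-user.bat could look like the sketch below. This is just the DirectML config from this thread with --use-zluda swapped in for --use-directml; whether flags like --no-half or --skip-torch-cuda-test are still needed under ZLUDA depends on your GPU and setup.

```bat
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
REM Only one backend flag: ZLUDA instead of DirectML
set COMMANDLINE_ARGS=--use-zluda --skip-torch-cuda-test --skip-python-version-check --api --no-half
call webui.bat
```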
Corrected:

@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--skip-torch-cuda-test --use-directml --skip-python-version-check --api --no-half
call webui.bat
This also works (note the duplicate --no-half removed, and SAFETENSORS_FAST_GPU set on its own line):

@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--use-directml --skip-torch-cuda-test --skip-python-version-check --api --no-half --medvram --precision full --no-half-vae --opt-split-attention-invokeai --always-batch-cond-uncond --opt-sub-quad-attention --sub-quad-q-chunk-size 512 --sub-quad-kv-chunk-size 512 --sub-quad-chunk-threshold 80 --disable-nan-check --upcast-sampling
set SAFETENSORS_FAST_GPU=1
call webui.bat
Unfortunately it's still quite slow, and most models cause the app to run out of VRAM.
The VAE resolution is 4x larger than SD 1.5's (four times the pixel count), so it is roughly 4x slower.
Install the webui with ZLUDA correctly by following my Automatic1111 Zluda Guide here: https://github.com/CS1o/Stable-Diffusion-Info/wiki/Installation-Guides
Is there an existing issue for this?
What would your feature do?
Allow webui-user.bat to execute successfully. Paste the following into "webui-user.bat":

@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--skip-torch-cuda-test --use-directml --use-zluda --skip-torch-cuda-test --skip-python-version-check --api --no-half
call webui.bat
Proposed workflow
Additional information
No response