lllyasviel / stable-diffusion-webui-forge

GNU Affero General Public License v3.0

unrecognized arguments: --lora-dir #1187

Open ViolentVotan opened 1 month ago

ViolentVotan commented 1 month ago

In the most recent version (Version: f2.0.1v1.10.1-previous-304-g394da019, Commit hash: 394da01959ae09acca361dc2be0e559ca26829d4),

I get the following error, and I also no longer see a --lora-dir argument listed at all. Was it renamed, or is something wrong?

launch.py: error: unrecognized arguments: --lora-dir
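For context: this "unrecognized arguments" message is argparse's standard rejection of a flag that was never registered on the parser. In A1111-style WebUIs, --lora-dir is normally registered by the built-in Lora extension's preload script, so if that registration path is skipped or removed by an update, the flag becomes unknown. A minimal sketch (a hypothetical parser, not Forge's actual launch.py) reproducing the behavior:

```python
import argparse

# Hypothetical stand-in for launch.py's parser; only --ckpt-dir is registered
# here, so --lora-dir is unknown, just as in the reported error.
parser = argparse.ArgumentParser(prog="launch.py")
parser.add_argument("--ckpt-dir")

try:
    # argparse prints "error: unrecognized arguments: --lora-dir loras" to
    # stderr and raises SystemExit(2) when it meets an unregistered flag.
    parser.parse_args(["--ckpt-dir", "models", "--lora-dir", "loras"])
except SystemExit as exc:
    print(f"parser exited with code {exc.code}")  # → parser exited with code 2
```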

twinnedAI commented 1 month ago

I have a similar issue: I do not get the error message, but the LoRAs in the provided path are not shown in the UI. Even if I copy them into the models/Lora folder, they will not be displayed in the UI.

Tom-Neverwinter commented 1 month ago

Can someone supply a log or two? I need more information to work with: hardware, OS, git repo version, etc.

ViolentVotan commented 4 weeks ago

Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] Version: f2.0.1v1.10.1-previous-313-g8a042934 Commit hash: 8a04293430af3b80760aa0065219256ce0bccc34

Pulled changes for repository in 'F:\AI Stuff\Models\Packages\Forge\stable-diffusion-webui-forge\extensions\adetailer': Already up to date.

Pulled changes for repository in 'F:\AI Stuff\Models\Packages\Forge\stable-diffusion-webui-forge\extensions\sd-dynamic-prompts': Already up to date.

Pulled changes for repository in 'F:\AI Stuff\Models\Packages\Forge\stable-diffusion-webui-forge\extensions\sd-webui-aspect-ratio-helper': Already up to date.

Pulled changes for repository in 'F:\AI Stuff\Models\Packages\Forge\stable-diffusion-webui-forge\extensions\Stable-Diffusion-Webui-Civitai-Helper': Already up to date.

Pulled changes for repository in 'F:\AI Stuff\Models\Packages\Forge\stable-diffusion-webui-forge\extensions\ultimate-upscale-for-automatic1111': Already up to date.

Launching Web UI with arguments: --cuda-malloc --cuda-stream --update-check --update-all-extensions --listen --autolaunch --api --no-half-vae --port 7861 --cors-allow-origins '*' --ckpt-dir 'F:\AI Stuff\Models\Models\StableDiffusion' --hypernetwork-dir 'F:\AI Stuff\Models\Models\Hypernetwork' --embeddings-dir 'F:\AI Stuff\Models\Models\Embeddings' --lora-dir 'F:\AI Stuff\Models\Models\Lora' --vae-dir 'F:\AI Stuff\Models\Models\VAE' --clip-models-path 'F:\AI Stuff\Models\Models\CLIP' --esrgan-models-path 'F:\AI Stuff\Models\Models\ESRGAN'
Using cudaMallocAsync backend.
Total VRAM 24564 MB, total RAM 64662 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync
VAE dtype preferences: [torch.bfloat16, torch.float32] -> torch.bfloat16
CUDA Using Stream: True
usage: launch.py [-h] [--gpu-device-id DEVICE_ID] [--all-in-fp32 | --all-in-fp16] [--unet-in-bf16 | --unet-in-fp16 | --unet-in-fp8-e4m3fn | --unet-in-fp8-e5m2] [--vae-in-fp16 | --vae-in-fp32 | --vae-in-bf16] [--vae-in-cpu] [--clip-in-fp8-e4m3fn | --clip-in-fp8-e5m2 | --clip-in-fp16 | --clip-in-fp32] [--attention-split | --attention-quad | --attention-pytorch] [--force-upcast-attention | --disable-attention-upcast] [--disable-xformers] [--directml [DIRECTML_DEVICE]] [--disable-ipex-hijack] [--always-gpu | --always-high-vram | --always-normal-vram | --always-low-vram | --always-no-vram | --always-cpu] [--always-offload-from-vram] [--pytorch-deterministic] [--cuda-malloc] [--cuda-stream] [--pin-shared-memory] [--update-all-extensions] [--skip-python-version-check] [--skip-torch-cuda-test] [--reinstall-xformers] [--reinstall-torch] [--update-check] [--test-server] [--log-startup] [--skip-prepare-environment] [--skip-install] [--dump-sysinfo] [--loglevel LOGLEVEL] [--do-not-download-clip] [--data-dir DATA_DIR] [--models-dir MODELS_DIR] [--config CONFIG] [--ckpt CKPT] [--ckpt-dir CKPT_DIR] [--vae-dir VAE_DIR] [--gfpgan-dir GFPGAN_DIR] [--gfpgan-model 
GFPGAN_MODEL] [--no-half] [--no-half-vae] [--no-progressbar-hiding] [--max-batch-count MAX_BATCH_COUNT] [--embeddings-dir EMBEDDINGS_DIR] [--textual-inversion-templates-dir TEXTUAL_INVERSION_TEMPLATES_DIR] [--hypernetwork-dir HYPERNETWORK_DIR] [--localizations-dir LOCALIZATIONS_DIR] [--allow-code] [--medvram] [--medvram-sdxl] [--lowvram] [--lowram] [--always-batch-cond-uncond] [--unload-gfpgan] [--precision {full,half,autocast}] [--upcast-sampling] [--share] [--ngrok NGROK] [--ngrok-region NGROK_REGION] [--ngrok-options NGROK_OPTIONS] [--enable-insecure-extension-access] [--codeformer-models-path CODEFORMER_MODELS_PATH] [--gfpgan-models-path GFPGAN_MODELS_PATH] [--esrgan-models-path ESRGAN_MODELS_PATH] [--bsrgan-models-path BSRGAN_MODELS_PATH] [--realesrgan-models-path REALESRGAN_MODELS_PATH] [--dat-models-path DAT_MODELS_PATH] [--clip-models-path CLIP_MODELS_PATH] [--xformers] [--force-enable-xformers] [--xformers-flash-attention] [--deepdanbooru] [--opt-split-attention] [--opt-sub-quad-attention] [--sub-quad-q-chunk-size SUB_QUAD_Q_CHUNK_SIZE] [--sub-quad-kv-chunk-size SUB_QUAD_KV_CHUNK_SIZE] [--sub-quad-chunk-threshold SUB_QUAD_CHUNK_THRESHOLD] [--opt-split-attention-invokeai] [--opt-split-attention-v1] [--opt-sdp-attention] [--opt-sdp-no-mem-attention] [--disable-opt-split-attention] [--disable-nan-check] [--use-cpu USE_CPU [USE_CPU ...]] [--use-ipex] [--disable-model-loading-ram-optimization] [--listen] [--port PORT] [--show-negative-prompt] [--ui-config-file UI_CONFIG_FILE] [--hide-ui-dir-config] [--freeze-settings] [--freeze-settings-in-sections FREEZE_SETTINGS_IN_SECTIONS] [--freeze-specific-settings FREEZE_SPECIFIC_SETTINGS] [--ui-settings-file UI_SETTINGS_FILE] [--gradio-debug] [--gradio-auth GRADIO_AUTH] [--gradio-auth-path GRADIO_AUTH_PATH] [--gradio-img2img-tool GRADIO_IMG2IMG_TOOL] [--gradio-inpaint-tool GRADIO_INPAINT_TOOL] [--gradio-allowed-path GRADIO_ALLOWED_PATH] [--opt-channelslast] [--styles-file STYLES_FILE] [--autolaunch] [--theme THEME] 
[--use-textbox-seed] [--disable-console-progressbars] [--enable-console-prompts] [--vae-path VAE_PATH] [--disable-safe-unpickle] [--api] [--api-auth API_AUTH] [--api-log] [--nowebui] [--ui-debug-mode] [--device-id DEVICE_ID] [--administrator] [--cors-allow-origins CORS_ALLOW_ORIGINS] [--cors-allow-origins-regex CORS_ALLOW_ORIGINS_REGEX] [--tls-keyfile TLS_KEYFILE] [--tls-certfile TLS_CERTFILE] [--disable-tls-verify] [--server-name SERVER_NAME] [--gradio-queue] [--no-gradio-queue] [--skip-version-check] [--no-hashing] [--no-download-sd-model] [--subpath SUBPATH] [--add-stop-route] [--api-server-stop] [--timeout-keep-alive TIMEOUT_KEEP_ALIVE] [--disable-all-extensions] [--disable-extra-extensions] [--skip-load-model-at-start] [--unix-filenames-sanitization] [--filenames-max-length FILENAMES_MAX_LENGTH] [--no-prompt-history] [--forge-ref-a1111-home FORGE_REF_A1111_HOME] [--controlnet-dir CONTROLNET_DIR] [--controlnet-preprocessor-models-dir CONTROLNET_PREPROCESSOR_MODELS_DIR] [--ad-no-huggingface] [--scunet-models-path SCUNET_MODELS_PATH] [--swinir-models-path SWINIR_MODELS_PATH] [--controlnet-loglevel {DEBUG,INFO,WARNING,ERROR,CRITICAL}] [--controlnet-tracemalloc]
launch.py: error: unrecognized arguments: --lora-dir F:\AI Stuff\Models\Models\Lora
Press any key to continue . . .

Windows 11 (latest patch level), NVIDIA RTX 4090

Here are the contents of webui-user.bat:

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS= --cuda-malloc --cuda-stream --update-all-extensions --listen --autolaunch --api --no-half-vae --port 7861 --cors-allow-origins *

@REM Uncomment following code to reference an existing A1111 checkout.
@REM set A1111_HOME=Your A1111 checkout dir
@REM
@REM set VENV_DIR=%A1111_HOME%/venv

set COMMANDLINE_ARGS=%COMMANDLINE_ARGS% ^
 --ckpt-dir "F:\AI Stuff\Models\Models\StableDiffusion" ^
 --hypernetwork-dir "F:\AI Stuff\Models\Models\Hypernetwork" ^
 --embeddings-dir "F:\AI Stuff\Models\Models\Embeddings" ^
 --lora-dir "F:\AI Stuff\Models\Models\Lora" ^
 --vae-dir "F:\AI Stuff\Models\Models\VAE" ^
 --clip-models-path "F:\AI Stuff\Models\Models\CLIP" ^
 --esrgan-models-path "F:\AI Stuff\Models\Models\ESRGAN"

call webui.bat

This setup had worked for days without any modification and only broke on the day I posted the issue; before that, --lora-dir worked perfectly.
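As background on why a previously working flag can vanish after a pull: A1111-style launchers typically parse arguments in two stages, where a first pass with argparse's parse_known_args() tolerates flags that extensions have not registered yet, and only the final strict parse rejects leftovers. A sketch of that pattern (hypothetical names, not Forge's actual code):

```python
import argparse

# Stage 1: tolerant parse. Unknown flags such as --lora-dir are collected
# in `unknown` rather than causing an error, so startup can continue while
# extensions register their own options.
parser = argparse.ArgumentParser(prog="launch.py")
parser.add_argument("--ckpt-dir")

args, unknown = parser.parse_known_args(
    ["--ckpt-dir", "models", "--lora-dir", "loras"]
)
print(unknown)  # → ['--lora-dir', 'loras']

# Stage 2: simulate an extension registering its flag, then do the strict
# parse. If this registration never happens, parse_args() fails with
# "unrecognized arguments: --lora-dir", as in the reported error.
parser.add_argument("--lora-dir")
args = parser.parse_args(["--ckpt-dir", "models", "--lora-dir", "loras"])
print(args.lora_dir)  # → loras
```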

twinnedAI commented 4 weeks ago

@Tom-Neverwinter please let me know if I should open a new issue. The "lora" tab is empty for me, but the checkpoints tab is working:

commit 8a04293430af3b80760aa0065219256ce0bccc34 Ubuntu 22.04.4 LTS x86_64 RTX 4060 Ti

(forge_venv) user@user-linux:~/development/stable-diffusion-webui-forge$ ./webui.sh 

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################

################################################################
Running on user user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
python venv already activate or run without venv: /home/user/development/stable-diffusion-webui-forge/forge_venv
################################################################

################################################################
Launching launch.py...
################################################################
glibc version is 2.35
Check TCMalloc: libtcmalloc_minimal.so.4
libtcmalloc_minimal.so.4 is linked with libc.so,execute LD_PRELOAD=/lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4
Python 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0]
Version: f2.0.1v1.10.1-previous-313-g8a042934
Commit hash: 8a04293430af3b80760aa0065219256ce0bccc34
Legacy Preprocessor init warning: Unable to install insightface automatically. Please try run `pip install insightface` manually.
Launching Web UI with arguments: --listen --ckpt-dir '~/development/ComfyUI/models/checkpoints' --lora-dir '~/development/ComfyUI/models/loras'
Total VRAM 15952 MB, total RAM 64221 MB
pytorch version: 2.3.1+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4060 Ti : native
Hint: your device supports --cuda-malloc for potential speed improvements.
VAE dtype preferences: [torch.bfloat16, torch.float32] -> torch.bfloat16
CUDA Using Stream: False
Using pytorch cross attention
Using pytorch attention for VAE
ControlNet preprocessor location: /home/user/development/stable-diffusion-webui-forge/models/ControlNetPreprocessor
2024-08-17 11:09:27,912 - ControlNet - INFO - ControlNet UI callback registered.
Model selected: {'checkpoint_info': {'filename': '/home/user/development/stable-diffusion-webui-forge/models/Stable-diffusion/checkpoints/flux/flux1-dev-fp8.safetensors', 'hash': 'be9881f4'}, 'additional_modules': [], 'unet_storage_dtype': None}
Running on local URL:  http://0.0.0.0:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 12.2s (prepare environment: 2.5s, launcher: 2.0s, import torch: 2.5s, initialize shared: 0.5s, other imports: 1.0s, load scripts: 1.2s, create ui: 1.6s, gradio launch: 1.0s).
Environment vars changed: {'stream': False, 'inference_memory': 1024.0, 'pin_shared_memory': False}
ViolentVotan commented 4 weeks ago

> @Tom-Neverwinter please let me know if I should open a new issue. The "lora" tab is empty for me, but the checkpoints tab is working.

Well, yes, I'd say so, given it is a completely different and unrelated issue.

bkosowski commented 17 hours ago

As of 210af4f80406f78a67e1c35a64a6febdf1200a82 I can change the lora directory through the --lora-dir switch.

@ViolentVotan please check and close the issue if it works for you.