vladmandic / automatic

SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
https://github.com/vladmandic/automatic
GNU Affero General Public License v3.0
5.71k stars · 423 forks

[Issue]: Naming error when trying to install and run Flux qint4 #3404

Closed · SAC020 closed this issue 2 months ago

SAC020 commented 2 months ago

Issue Description

When trying to install Flux qint4 I get the error below; SD.Next then installs both Flux dev and Flux qint4, and reverts to Flux dev when attempting text-to-image (T2I) generation.

08:28:42-532859 ERROR Loading FLUX: Failed to load Quanto transformer: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'Diffusers\Disty0/FLUX.1-dev-qint4'.

This is what it installs in the diffusers folder

[screenshot: contents of the models\Diffusers folder after the attempted install]

Flux dev itself is unusable for me due to high VRAM consumption, but that is already known.
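The rejected id in the error is easier to see in isolation: the name handed to the hub still carries SD.Next's local `Diffusers\` folder prefix (the traceback further down shows `model_flux.py` stripping only the forward-slash form `'Diffusers/'`), and the hub's validator rejects the backslash. A minimal sketch of the check, using a simplified stand-in for `huggingface_hub`'s real repo-id rule (the regex below is illustrative, not the library's code):

```python
import re

# Simplified approximation of huggingface_hub's repo-id validation:
# an optional "namespace/" followed by a name built from alphanumerics,
# '-', '_', '.', with a 96-character limit.
REPO_ID_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9_.-]*(/[A-Za-z0-9][A-Za-z0-9_.-]*)?$")

def is_valid_repo_id(repo_id: str) -> bool:
    return len(repo_id) <= 96 and bool(REPO_ID_RE.match(repo_id))

# What gets passed to hf_hub_download: the local "Diffusers\" folder
# prefix (Windows backslash) was never stripped.
name = r"Diffusers\Disty0/FLUX.1-dev-qint4"
print(is_valid_repo_id(name))     # False: backslash is not an allowed char

# Stripping the prefix for either separator yields a valid hub id.
repo_id = name.replace("Diffusers\\", "").replace("Diffusers/", "")
print(repo_id)                    # Disty0/FLUX.1-dev-qint4
print(is_valid_repo_id(repo_id))  # True
```

Normalizing both separators before the lookup is one way to make the id valid; whether that matches the actual fix that landed in the dev branch is not shown here.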

Full log (model installation + attempting T2I) below

Version Platform Description

08:22:55-184036 INFO Python version=3.11.9 platform=Windows bin="C:\ai\automatic\venv\Scripts\python.exe" venv="C:\ai\automatic\venv"
08:22:55-378508 INFO Version: app=sd.next updated=2024-09-02 hash=bba17766 branch=dev url=https://github.com/vladmandic/automatic/tree/dev ui=dev
08:22:56-191172 INFO Latest published version: bab17a0b4f91b41c885f10262ef8c8e70ba72faa 2024-08-31T20:57:34Z
08:22:56-204169 INFO Platform: arch=AMD64 cpu=Intel64 Family 6 Model 165 Stepping 5, GenuineIntel system=Windows release=Windows-10-10.0.22631-SP0 python=3.11.9

Relevant log output

PS C:\ai\automatic> .\webui.bat --medvram --debug
Using VENV: C:\ai\automatic\venv
08:22:55-179048 INFO     Starting SD.Next
08:22:55-182040 INFO     Logger: file="C:\ai\automatic\sdnext.log" level=DEBUG size=65 mode=create
08:22:55-184036 INFO     Python version=3.11.9 platform=Windows bin="C:\ai\automatic\venv\Scripts\python.exe"
                         venv="C:\ai\automatic\venv"
08:22:55-378508 INFO     Version: app=sd.next updated=2024-09-02 hash=bba17766 branch=dev
                         url=https://github.com/vladmandic/automatic/tree/dev ui=dev
08:22:56-191172 INFO     Latest published version: bab17a0b4f91b41c885f10262ef8c8e70ba72faa 2024-08-31T20:57:34Z
08:22:56-204169 INFO     Platform: arch=AMD64 cpu=Intel64 Family 6 Model 165 Stepping 5, GenuineIntel system=Windows
                         release=Windows-10-10.0.22631-SP0 python=3.11.9
08:22:56-206160 DEBUG    Setting environment tuning
08:22:56-207130 INFO     HF cache folder: C:\Users\sebas\.cache\huggingface\hub
08:22:56-209125 DEBUG    Torch allocator: "garbage_collection_threshold:0.80,max_split_size_mb:512"
08:22:56-219929 DEBUG    Torch overrides: cuda=False rocm=False ipex=False diml=False openvino=False
08:22:56-221134 DEBUG    Torch allowed: cuda=True rocm=True ipex=True diml=True openvino=True
08:22:56-232221 INFO     nVidia CUDA toolkit detected: nvidia-smi present
08:22:56-313452 WARNING  Modified files: ['models/Reference/playgroundai--playground-v2-1024px-aesthetic.jpg']
08:22:56-403214 INFO     Verifying requirements
08:22:56-407228 INFO     Verifying packages
08:22:56-450259 DEBUG    Repository update time: Tue Sep  3 01:08:32 2024
08:22:56-450821 INFO     Startup: standard
08:22:56-452117 INFO     Verifying submodules
08:22:59-888555 DEBUG    Git detached head detected: folder="extensions-builtin/sd-extension-chainner" reattach=main
08:22:59-889846 DEBUG    Submodule: extensions-builtin/sd-extension-chainner / main
08:23:00-009633 DEBUG    Git detached head detected: folder="extensions-builtin/sd-extension-system-info" reattach=main
08:23:00-011627 DEBUG    Submodule: extensions-builtin/sd-extension-system-info / main
08:23:00-132141 DEBUG    Git detached head detected: folder="extensions-builtin/sd-webui-agent-scheduler" reattach=main
08:23:00-133139 DEBUG    Submodule: extensions-builtin/sd-webui-agent-scheduler / main
08:23:00-301094 DEBUG    Git detached head detected: folder="extensions-builtin/sdnext-modernui" reattach=dev
08:23:00-302121 DEBUG    Submodule: extensions-builtin/sdnext-modernui / dev
08:23:00-444978 DEBUG    Git detached head detected: folder="extensions-builtin/stable-diffusion-webui-rembg"
                         reattach=master
08:23:00-445976 DEBUG    Submodule: extensions-builtin/stable-diffusion-webui-rembg / master
08:23:00-567390 DEBUG    Git detached head detected: folder="modules/k-diffusion" reattach=master
08:23:00-568388 DEBUG    Submodule: modules/k-diffusion / master
08:23:00-686117 DEBUG    Git detached head detected: folder="wiki" reattach=master
08:23:00-687142 DEBUG    Submodule: wiki / master
08:23:00-759576 DEBUG    Register paths
08:23:00-846371 DEBUG    Installed packages: 207
08:23:00-847608 DEBUG    Extensions all: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'sdnext-modernui',
                         'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg']
08:23:01-025059 DEBUG    Running extension installer:
                         C:\ai\automatic\extensions-builtin\sd-extension-system-info\install.py
08:23:01-393054 DEBUG    Running extension installer:
                         C:\ai\automatic\extensions-builtin\sd-webui-agent-scheduler\install.py
08:23:01-757647 DEBUG    Running extension installer: C:\ai\automatic\extensions-builtin\sd-webui-controlnet\install.py
08:23:02-266898 DEBUG    Running extension installer:
                         C:\ai\automatic\extensions-builtin\stable-diffusion-webui-images-browser\install.py
08:23:02-632486 DEBUG    Running extension installer:
                         C:\ai\automatic\extensions-builtin\stable-diffusion-webui-rembg\install.py
08:23:02-996279 DEBUG    Extensions all: []
08:23:02-997230 INFO     Extensions enabled: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'sdnext-modernui',
                         'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg']
08:23:02-999228 INFO     Verifying requirements
08:23:03-000225 DEBUG    Setup complete without errors: 1725340983
08:23:03-006918 DEBUG    Extension preload: {'extensions-builtin': 0.0, 'extensions': 0.0}
08:23:03-007886 DEBUG    Starting module: <module 'webui' from 'C:\\ai\\automatic\\webui.py'>
08:23:03-008883 INFO     Command line args: ['--medvram', '--debug'] medvram=True debug=True
08:23:03-009881 DEBUG    Env flags: []
08:23:08-808284 INFO     Load packages: {'torch': '2.4.0+cu124', 'diffusers': '0.31.0.dev0', 'gradio': '3.43.2'}
08:23:09-836090 DEBUG    Read: file="config.json" json=35 bytes=1455 time=0.000
08:23:09-838085 DEBUG    Unknown settings: ['cross_attention_options']
08:23:09-840080 INFO     Engine: backend=Backend.DIFFUSERS compute=cuda device=cuda attention="Scaled-Dot-Product"
                         mode=no_grad
08:23:09-898922 INFO     Device: device=NVIDIA GeForce RTX 4080 n=1 arch=sm_90 cap=(8, 9) cuda=12.4 cudnn=90100
                         driver=560.81
08:23:09-900917 DEBUG    Read: file="html\reference.json" json=52 bytes=29118 time=0.001
08:23:10-300475 DEBUG    ONNX: version=1.19.0 provider=CUDAExecutionProvider, available=['AzureExecutionProvider',
                         'CPUExecutionProvider']
08:23:10-478064 DEBUG    Importing LDM
08:23:10-493584 DEBUG    Entering start sequence
08:23:10-496353 DEBUG    Initializing
08:23:10-521259 INFO     Available VAEs: path="models\VAE" items=0
08:23:10-523255 DEBUG    Available UNets: path="models\UNET" items=0
08:23:10-524901 DEBUG    Available T5s: path="models\T5" items=0
08:23:10-525871 INFO     Disabled extensions: ['sd-webui-controlnet', 'sdnext-modernui']
08:23:10-527865 DEBUG    Read: file="cache.json" json=2 bytes=10089 time=0.000
08:23:10-534874 DEBUG    Read: file="metadata.json" json=470 bytes=1620174 time=0.005
08:23:10-539834 DEBUG    Scanning diffusers cache: folder=models\Diffusers items=0 time=0.00
08:23:10-541198 INFO     Available models: path="models\Stable-diffusion" items=20 time=0.01
08:23:10-730721 DEBUG    Load extensions
08:23:10-778022 INFO     Extension: script='extensions-builtin\Lora\scripts\lora_script.py'
08:23:10-775030 INFO     LoRA networks: available=70 folders=2
08:23:11-151765 INFO     Extension: script='extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py' Using
                         sqlite file: extensions-builtin\sd-webui-agent-scheduler\task_scheduler.sqlite3
08:23:11-346834 DEBUG    Extensions init time: 0.61 sd-webui-agent-scheduler=0.33
                         stable-diffusion-webui-images-browser=0.18
08:23:11-359800 DEBUG    Read: file="html/upscalers.json" json=4 bytes=2672 time=0.000
08:23:11-360797 DEBUG    Read: file="extensions-builtin\sd-extension-chainner\models.json" json=24 bytes=2719 time=0.000
08:23:11-362792 DEBUG    chaiNNer models: path="models\chaiNNer" defined=24 discovered=0 downloaded=8
08:23:11-364786 DEBUG    Upscaler type=ESRGAN folder="models\ESRGAN" model="1x-ITF-SkinDiffDetail-Lite-v1"
                         path="models\ESRGAN\1x-ITF-SkinDiffDetail-Lite-v1.pth"
08:23:11-365783 DEBUG    Upscaler type=ESRGAN folder="models\ESRGAN" model="4xNMKDSuperscale_4xNMKDSuperscale"
                         path="models\ESRGAN\4xNMKDSuperscale_4xNMKDSuperscale.pth"
08:23:11-366781 DEBUG    Upscaler type=ESRGAN folder="models\ESRGAN" model="4x_NMKD-Siax_200k"
                         path="models\ESRGAN\4x_NMKD-Siax_200k.pth"
08:23:11-369772 DEBUG    Load upscalers: total=56 downloaded=11 user=3 time=0.02 ['None', 'Lanczos', 'Nearest',
                         'ChaiNNer', 'AuraSR', 'ESRGAN', 'LDSR', 'RealESRGAN', 'SCUNet', 'SD', 'SwinIR']
08:23:11-386728 DEBUG    Load styles: folder="models\styles" items=288 time=0.01
08:23:11-389720 DEBUG    Creating UI
08:23:11-390717 DEBUG    UI themes available: type=Standard themes=12
08:23:11-391893 INFO     UI theme: type=Standard name="black-teal"
08:23:11-399489 DEBUG    UI theme: css="C:\ai\automatic\javascript\black-teal.css" base="sdnext.css" user="None"
08:23:11-401485 DEBUG    UI initialize: txt2img
08:23:11-460851 DEBUG    Networks: page='model' items=71 subfolders=2 tab=txt2img folders=['models\\Stable-diffusion',
                         'models\\Diffusers', 'models\\Reference'] list=0.04 thumb=0.00 desc=0.01 info=0.00 workers=4
                         sort=Default
08:23:11-469245 DEBUG    Networks: page='lora' items=70 subfolders=0 tab=txt2img folders=['models\\Lora',
                         'models\\LyCORIS'] list=0.04 thumb=0.01 desc=0.02 info=0.02 workers=4 sort=Default
08:23:11-500190 DEBUG    Networks: page='style' items=288 subfolders=1 tab=txt2img folders=['models\\styles', 'html']
                         list=0.04 thumb=0.00 desc=0.00 info=0.00 workers=4 sort=Default
08:23:11-505597 DEBUG    Networks: page='embedding' items=13 subfolders=0 tab=txt2img folders=['models\\embeddings']
                         list=0.03 thumb=0.01 desc=0.00 info=0.00 workers=4 sort=Default
08:23:11-507560 DEBUG    Networks: page='vae' items=0 subfolders=0 tab=txt2img folders=['models\\VAE'] list=0.00
                         thumb=0.00 desc=0.00 info=0.00 workers=4 sort=Default
08:23:11-581390 DEBUG    UI initialize: img2img
08:23:11-822779 DEBUG    UI initialize: control models=models\control
08:23:12-077976 DEBUG    Read: file="ui-config.json" json=0 bytes=2 time=0.000
08:23:12-178414 DEBUG    UI themes available: type=Standard themes=12
08:23:12-704712 DEBUG    Reading failed: C:\ai\automatic\html\extensions.json [Errno 2] No such file or directory:
                         'C:\\ai\\automatic\\html\\extensions.json'
08:23:12-705738 INFO     Extension list is empty: refresh required
08:23:13-287174 DEBUG    Extension list: processed=8 installed=8 enabled=6 disabled=2 visible=8 hidden=0
08:23:13-614300 DEBUG    Root paths: ['C:\\ai\\automatic']
08:23:13-690228 INFO     Local URL: http://127.0.0.1:7860/
08:23:13-691226 DEBUG    Gradio functions: registered=2361
08:23:13-692376 DEBUG    FastAPI middleware: ['Middleware', 'Middleware']
08:23:13-695397 DEBUG    Creating API
08:23:13-861417 INFO     [AgentScheduler] Task queue is empty
08:23:13-862105 INFO     [AgentScheduler] Registering APIs
08:23:13-980818 DEBUG    Scripts setup: ['IP Adapters:0.023', 'AnimateDiff:0.008', 'X/Y/Z Grid:0.011', 'Face:0.01']
08:23:13-982092 DEBUG    Model metadata: file="metadata.json" no changes
08:23:13-983377 DEBUG    Torch mode: deterministic=False
08:23:14-011305 INFO     Torch override VAE dtype: no-half set
08:23:14-012019 DEBUG    Desired Torch parameters: dtype=FP16 no-half=False no-half-vae=True upscast=False
08:23:14-013044 INFO     Setting Torch parameters: device=cuda dtype=torch.float16 vae=torch.float32 unet=torch.float16
                         context=no_grad fp16=True bf16=None optimization=Scaled-Dot-Product
08:23:14-015013 DEBUG    Model requested: fn=<lambda>
08:23:14-016011 WARNING  Selected: model="Diffusers\Disty0/FLUX.1-dev-qint4 [82811df42b]" not found
08:23:14-017764 DEBUG    Script callback init time: image_browser.py:ui_tabs=0.39 system-info.py:app_started=0.06
                         task_scheduler.py:app_started=0.13
08:23:14-018792 INFO     Startup time: 11.01 torch=4.11 gradio=1.27 diffusers=0.41 libraries=1.67 extensions=0.61
                         face-restore=0.19 ui-en=0.21 ui-txt2img=0.05 ui-img2img=0.21 ui-control=0.11 ui-settings=0.23
                         ui-extensions=1.02 ui-defaults=0.25 launch=0.13 api=0.09 app-started=0.20
08:23:14-020790 DEBUG    Save: file="config.json" json=35 bytes=1408 time=0.003
08:23:14-022783 DEBUG    Unused settings: ['cross_attention_options']
08:24:00-035737 DEBUG    Server: alive=True jobs=1 requests=1 uptime=51 memory=0.98/63.92 backend=Backend.DIFFUSERS
                         state=idle
08:26:00-077613 DEBUG    Server: alive=True jobs=1 requests=1 uptime=171 memory=0.98/63.92 backend=Backend.DIFFUSERS
                         state=idle
08:26:28-669831 INFO     MOTD: N/A
08:26:31-238331 DEBUG    UI themes available: type=Standard themes=12
08:26:31-427893 INFO     Browser session: user=None client=127.0.0.1 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64)
                         AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36 Edg/128.0.0.0
08:26:46-170132 DEBUG    Reference: download="Disty0/FLUX.1-dev-qint4"
08:26:46-172164 DEBUG    Diffusers downloading: id="Disty0/FLUX.1-dev-qint4" args={'force_download': False,
                         'resume_download': True, 'cache_dir': 'models\\Diffusers', 'load_connected_pipeline': True}
model_index.json: 100%|███████████████████████████████████████████████████████████████████████| 536/536 [00:00<?, ?B/s]
text_encoder/config.json: 100%|███████████████████████████████████████████████████████████████| 613/613 [00:00<?, ?B/s]
scheduler/scheduler_config.json: 100%|████████████████████████████████████████████████████████| 273/273 [00:00<?, ?B/s]
text_encoder_2/config.json: 100%|█████████████████████████████████████████████████████████████| 909/909 [00:00<?, ?B/s]
tokenizer/tokenizer_config.json: 100%|████████████████████████████████████████████████████████| 705/705 [00:00<?, ?B/s]
tokenizer_2/special_tokens_map.json: 100%|████████████████████████████████████████████████| 2.54k/2.54k [00:00<?, ?B/s]
tokenizer/special_tokens_map.json: 100%|██████████████████████████████████████████████████████| 588/588 [00:00<?, ?B/s]
transformer/config.json: 100%|████████████████████████████████████████████████████████████████| 535/535 [00:00<?, ?B/s]
tokenizer_2/tokenizer_config.json: 100%|██████████████████████████████████████████████████| 20.8k/20.8k [00:00<?, ?B/s]
spiece.model: 100%|█████████████████████████████████████████████████████████████████| 792k/792k [00:00<00:00, 10.5MB/s]
tokenizer/merges.txt: 100%|█████████████████████████████████████████████████████████| 525k/525k [00:00<00:00, 1.23MB/s]
tokenizer/vocab.json: 100%|███████████████████████████████████████████████████████| 1.06M/1.06M [00:00<00:00, 1.99MB/s]
vae/config.json: 100%|████████████████████████████████████████████████████████████████████████| 820/820 [00:00<?, ?B/s]
tokenizer_2/tokenizer.json: 100%|█████████████████████████████████████████████████| 2.42M/2.42M [00:00<00:00, 3.14MB/s]
diffusion_pytorch_model.safetensors: 100%|██████████████████████████████████████████| 168M/168M [00:05<00:00, 33.0MB/s]
model.safetensors: 100%|████████████████████████████████████████████████████████████| 246M/246M [00:08<00:00, 29.7MB/s]
model.safetensors: 100%|██████████████████████████████████████████████████████████| 2.72G/2.72G [00:49<00:00, 54.9MB/s]
Fetching 18 files:  33%|█████████████████████▎                                          | 6/18 [00:50<02:11, 10.95s/it]
08:28:00-206307 DEBUG    Server: alive=True jobs=2 requests=71 uptime=291 memory=1.01/63.92 backend=Backend.DIFFUSERS
                         state=job="HuggingFace" 0/-1
diffusion_pytorch_model.safetensors: 100%|████████████████████████████████████████| 6.33G/6.33G [01:47<00:00, 58.8MB/s]
Fetching 18 files: 100%|███████████████████████████████████████████████████████████████| 18/18 [01:49<00:00,  6.09s/it]
08:28:36-991476 DEBUG    Save:
                         file="models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d5
                         170c306a6eb\model_info.json" json=12 bytes=214 time=0.001
08:28:36-993470 DEBUG    Reference download complete: model="Disty0/FLUX.1-dev-qint4"
08:28:36-998455 DEBUG    Scanning diffusers cache: folder=models\Diffusers items=1 time=0.00
08:28:37-000452 INFO     Available models: path="models\Stable-diffusion" items=21 time=0.01
08:28:37-481165 INFO     Select: model="Diffusers\Disty0/FLUX.1-dev-qint4 [82811df42b]"
08:28:37-483160 DEBUG    Load model: existing=False
                         target=models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d
                         5170c306a6eb info=None
08:28:37-485182 DEBUG    Diffusers loading:
                         path="models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d5
                         170c306a6eb"
08:28:37-486152 INFO     Autodetect: model="FLUX" class=FluxPipeline
                         file="models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d5
                         170c306a6eb" size=0MB
08:28:37-504103 DEBUG    Loading FLUX: model="Diffusers\Disty0/FLUX.1-dev-qint4" unet="None" t5="None" vae="None"
                         quant=qint4 offload=model dtype=torch.float16
08:28:37-505100 INFO     Install: package="optimum-quanto" mode=pip
08:28:37-506099 DEBUG    Running: pip="install --upgrade optimum-quanto"
08:28:42-532859 ERROR    Loading FLUX: Failed to load Quanto transformer: Repo id must use alphanumeric chars or '-',
                         '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is
                         96: 'Diffusers\Disty0/FLUX.1-dev-qint4'.
08:28:42-533856 ERROR    FLUX Quanto:: HFValidationError
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ C:\ai\automatic\modules\model_flux.py:50 in load_flux_quanto                                                         │
│                                                                                                                      │
│    49 │   │   │   repo_id = checkpoint_info.name.replace('Diffusers/', '')                                           │
│ ❱  50 │   │   │   quantization_map = hf_hub_download(repo_id, subfolder='transformer', filename='quantization_map.js │
│    51 │   │   with open(quantization_map, "r", encoding='utf8') as f:                                                │
│                                                                                                                      │
│ C:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_deprecation.py:101 in inner_f                          │
│                                                                                                                      │
│   100 │   │   │   │   warnings.warn(message, FutureWarning)                                                          │
│ ❱ 101 │   │   │   return f(*args, **kwargs)                                                                          │
│   102                                                                                                                │
│                                                                                                                      │
│ C:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_validators.py:106 in _inner_fn                         │
│                                                                                                                      │
│   105 │   │   │   if arg_name in ["repo_id", "from_id", "to_id"]:                                                    │
│ ❱ 106 │   │   │   │   validate_repo_id(arg_value)                                                                    │
│   107                                                                                                                │
│                                                                                                                      │
│ C:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_validators.py:160 in validate_repo_id                  │
│                                                                                                                      │
│   159 │   if not REPO_ID_REGEX.match(repo_id):                                                                       │
│ ❱ 160 │   │   raise HFValidationError(                                                                               │
│   161 │   │   │   "Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are"                          │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'Diffusers\Disty0/FLUX.1-dev-qint4'.
08:28:42-581912 ERROR    Loading FLUX: Failed to load Quanto text encoder: Repo id must use alphanumeric chars or '-',
                         '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is
                         96: 'Diffusers\Disty0/FLUX.1-dev-qint4'.
08:28:42-583450 ERROR    FLUX Quanto:: HFValidationError
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ C:\ai\automatic\modules\model_flux.py:74 in load_flux_quanto                                                         │
│                                                                                                                      │
│    73 │   │   │   repo_id = checkpoint_info.name.replace('Diffusers/', '')                                           │
│ ❱  74 │   │   │   quantization_map = hf_hub_download(repo_id, subfolder='text_encoder_2', filename='quantization_map │
│    75 │   │   with open(quantization_map, "r", encoding='utf8') as f:                                                │
│                                                                                                                      │
│ C:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_deprecation.py:101 in inner_f                          │
│                                                                                                                      │
│   100 │   │   │   │   warnings.warn(message, FutureWarning)                                                          │
│ ❱ 101 │   │   │   return f(*args, **kwargs)                                                                          │
│   102                                                                                                                │
│                                                                                                                      │
│ C:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_validators.py:106 in _inner_fn                         │
│                                                                                                                      │
│   105 │   │   │   if arg_name in ["repo_id", "from_id", "to_id"]:                                                    │
│ ❱ 106 │   │   │   │   validate_repo_id(arg_value)                                                                    │
│   107                                                                                                                │
│                                                                                                                      │
│ C:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_validators.py:160 in validate_repo_id                  │
│                                                                                                                      │
│   159 │   if not REPO_ID_REGEX.match(repo_id):                                                                       │
│ ❱ 160 │   │   raise HFValidationError(                                                                               │
│   161 │   │   │   "Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are"                          │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'Diffusers\Disty0/FLUX.1-dev-qint4'.
08:28:42-606031 DEBUG    Loading FLUX: preloaded=[]
model_index.json: 100%|████████████████████████████████████████████████████████████████| 536/536 [00:00<00:00, 537kB/s]
text_encoder/config.json: 100%|███████████████████████████████████████████████████████████████| 613/613 [00:00<?, ?B/s]
(…)t_encoder_2/model.safetensors.index.json: 100%|████████████████████████████████| 19.9k/19.9k [00:00<00:00, 19.9MB/s]
scheduler/scheduler_config.json: 100%|████████████████████████████████████████████████████████| 273/273 [00:00<?, ?B/s]
tokenizer/special_tokens_map.json: 100%|██████████████████████████████████████████████████████| 588/588 [00:00<?, ?B/s]
tokenizer/tokenizer_config.json: 100%|████████████████████████████████████████████████████████| 705/705 [00:00<?, ?B/s]
text_encoder_2/config.json: 100%|██████████████████████████████████████████████████████| 782/782 [00:00<00:00, 783kB/s]
tokenizer_2/special_tokens_map.json: 100%|████████████████████████████████████████████████| 2.54k/2.54k [00:00<?, ?B/s]
tokenizer/merges.txt: 100%|█████████████████████████████████████████████████████████| 525k/525k [00:00<00:00, 1.28MB/s]
tokenizer/vocab.json: 100%|███████████████████████████████████████████████████████| 1.06M/1.06M [00:00<00:00, 2.72MB/s]
spiece.model: 100%|█████████████████████████████████████████████████████████████████| 792k/792k [00:00<00:00, 15.6MB/s]
tokenizer_2/tokenizer_config.json: 100%|██████████████████████████████████████████████████| 20.8k/20.8k [00:00<?, ?B/s]
transformer/config.json: 100%|████████████████████████████████████████████████████████████████| 378/378 [00:00<?, ?B/s]
(…)ion_pytorch_model.safetensors.index.json: 100%|███████████████████████████████████| 121k/121k [00:00<00:00, 868kB/s]
tokenizer_2/tokenizer.json: 100%|█████████████████████████████████████████████████| 2.42M/2.42M [00:00<00:00, 3.32MB/s]
vae/config.json: 100%|████████████████████████████████████████████████████████████████████████| 820/820 [00:00<?, ?B/s]
diffusion_pytorch_model.safetensors: 100%|██████████████████████████████████████████| 168M/168M [00:10<00:00, 16.1MB/s]
model.safetensors: 100%|████████████████████████████████████████████████████████████| 246M/246M [00:14<00:00, 16.5MB/s]
Fetching 23 files:  17%|███████████▏                                                    | 4/23 [00:17<01:34,  4.96s/it]
08:29:59-948363 DEBUG    Server: alive=True jobs=2 requests=84 uptime=411 memory=1.09/63.92 backend=Backend.DIFFUSERS
                         state=idle
model-00001-of-00002.safetensors:   5%|██▍                                         | 273M/4.99G [00:14<03:58, 19.8MB/s]
08:31:59-987827 DEBUG    Server: alive=True jobs=2 requests=90 uptime=531 memory=1.08/63.92 backend=Backend.DIFFUSERS
                         state=idle
(…)pytorch_model-00003-of-00003.safetensors: 100%|████████████████████████████████| 3.87G/3.87G [04:41<00:00, 13.8MB/s]
model-00002-of-00002.safetensors: 100%|███████████████████████████████████████████| 4.53G/4.53G [04:48<00:00, 15.7MB/s]
model-00001-of-00002.safetensors: 100%|███████████████████████████████████████████| 4.99G/4.99G [05:03<00:00, 16.5MB/s]
Fetching 23 files:  26%|████████████████▋                                               | 6/23 [05:06<19:28, 68.75s/it]
08:34:00-025220 DEBUG    Server: alive=True jobs=2 requests=107 uptime=651 memory=1.04/63.92 backend=Backend.DIFFUSERS
                         state=idle
model-00001-of-00002.safetensors: 100%|██████████████████████████████████████████▉| 4.99G/4.99G [05:03<00:00, 46.4MB/s]
08:35:59-973116 DEBUG    Server: alive=True jobs=2 requests=113 uptime=771 memory=1.04/63.92 backend=Backend.DIFFUSERS
                         state=idle
(…)pytorch_model-00002-of-00003.safetensors: 100%|████████████████████████████████| 9.95G/9.95G [07:49<00:00, 21.2MB/s]
(…)pytorch_model-00001-of-00003.safetensors: 100%|████████████████████████████████| 9.98G/9.98G [07:53<00:00, 21.1MB/s]
Fetching 23 files: 100%|███████████████████████████████████████████████████████████████| 23/23 [07:55<00:00, 20.69s/it]

Loading checkpoint shards: 100%|#########################################################| 2/2 [00:09<00:00,  4.54s/it]

Loading pipeline components... 100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7/7  [ 0:01:02 < 0:00:00 , 2 C/s ]
08:37:41-431087 INFO     Load embeddings: loaded=0 skipped=13 time=0.04
08:37:41-506912 DEBUG    Setting model VAE: no-half=True
08:37:41-508610 DEBUG    Setting model: slicing=True
08:37:41-509234 DEBUG    Setting model: tiling=True
08:37:41-510233 DEBUG    Setting model: attention=Scaled-Dot-Product
08:37:41-535529 DEBUG    Setting model: offload=model
08:37:41-773417 DEBUG    GC: utilization={'gpu': 8, 'ram': 54, 'threshold': 80} gc={'collected': 304, 'saved': 0.0}
                         before={'gpu': 1.33, 'ram': 34.57} after={'gpu': 1.33, 'ram': 34.57, 'retries': 0, 'oom': 0}
                         device=cuda fn=load_diffuser time=0.21
08:37:41-775316 INFO     Load model: time=544.08 load=543.90 options=0.10 native=1024 {'ram': {'used': 34.57, 'total':
                         63.92}, 'gpu': {'used': 1.33, 'total': 15.99}, 'retries': 0, 'oom': 0}
08:37:41-777591 DEBUG    Setting changed: sd_model_checkpoint=Disty0/FLUX.1-dev-qint4 progress=True
08:37:41-780607 DEBUG    Save: file="config.json" json=35 bytes=1408 time=0.003
08:37:41-781582 DEBUG    Unused settings: ['cross_attention_options']
08:38:00-051702 DEBUG    Server: alive=True jobs=2 requests=121 uptime=891 memory=34.57/63.92 backend=Backend.DIFFUSERS
                         state=idle
08:39:05-621128 INFO     Base: class=FluxPipeline
08:39:05-624120 DEBUG    Sampler default FlowMatchEulerDiscreteScheduler: {'num_train_timesteps': 1000, 'shift': 3.0,
                         'use_dynamic_shifting': True, 'base_shift': 0.5, 'max_shift': 1.15, 'base_image_seq_len': 256,
                         'max_image_seq_len': 4096}
08:39:05-656036 DEBUG    Torch generator: device=cuda seeds=[3163949593]
08:39:05-657033 DEBUG    Diffuser pipeline: FluxPipeline task=DiffusersTaskType.TEXT_2_IMAGE batch=1/1x1 set={'prompt':
                         1, 'guidance_scale': 6, 'num_inference_steps': 20, 'output_type': 'latent', 'width': 1024,
                         'height': 1024, 'parser': 'Fixed attention'}
Progress ?it/s                                              0% 0/20 00:00 ? Base

Backend: Diffusers
UI: Standard
Branch: Dev
Model: Other

vladmandic commented 2 months ago

do not download random zip files like that.

vladmandic commented 2 months ago

this is fixed in dev branch, service release will be soon.
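For context on the failure mode: the local folder name `Diffusers\Disty0/FLUX.1-dev-qint4` was being passed to `hf_hub_download` as a repo id, which huggingface_hub rejects because backslashes are not valid repo-id characters. A minimal sketch of the problem and the kind of normalization involved, using a simplified approximation of huggingface_hub's validation rule (the helper names here are illustrative, not the actual fix in `modules/model_flux.py`):

```python
import re

# Simplified approximation of huggingface_hub's repo-id validation:
# an optional "namespace/" followed by a name built from alphanumerics,
# '-', '_', and '.', with a 96-character overall limit.
REPO_ID_RE = re.compile(r"^[A-Za-z0-9][\w.-]*(/[A-Za-z0-9][\w.-]*)?$")

def is_valid_repo_id(repo_id: str) -> bool:
    return bool(REPO_ID_RE.match(repo_id)) and len(repo_id) <= 96

def normalize_model_name(name: str) -> str:
    """Strip the local 'Diffusers' folder prefix regardless of slash direction."""
    name = name.replace("\\", "/")  # Windows model names carry backslashes
    if name.startswith("Diffusers/"):
        name = name[len("Diffusers/"):]
    return name

broken = "Diffusers\\Disty0/FLUX.1-dev-qint4"
print(is_valid_repo_id(broken))                        # False: backslash not allowed
print(normalize_model_name(broken))                    # Disty0/FLUX.1-dev-qint4
print(is_valid_repo_id(normalize_model_name(broken)))  # True
```
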

SAC020 commented 2 months ago

> this is fixed in dev branch, service release will be soon.

I don't know what a "service release" is, I am on dev branch and it doesn't seem fixed. Can you please share any specific timeline indication so I don't "git pull" every 10 minutes? :)

TY!

vladmandic commented 2 months ago

reopening. set env variable SD_LOAD_DEBUG=true, reproduce the issue and upload the new log - it will contain more information.
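SD_LOAD_DEBUG is read from the process environment at startup. A minimal sketch of the usual pattern for such a flag (names are illustrative, not SD.Next's actual code):

```python
import os

# Hypothetical environment-flag gate; accepts the common truthy spellings.
def env_flag(name: str, default: bool = False) -> bool:
    val = os.environ.get(name)
    if val is None:
        return default
    return val.strip().lower() in ("1", "true", "yes", "on")

load_debug = env_flag("SD_LOAD_DEBUG")  # False unless set before launch
```
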

SAC020 commented 2 months ago

> reopening. set env variable SD_LOAD_DEBUG=true, reproduce the issue and upload the new log - it will contain more information.

ok, how do I set env variable SD_LOAD_DEBUG=true?

vladmandic commented 2 months ago

set SD_LOAD_DEBUG=true
.\webui.bat --medvram --debug

SAC020 commented 2 months ago

`PS C:\ai\automatic> set SD_LOAD_DEBUG=true PS C:\ai\automatic> .\webui.bat --medvram --debug Using VENV: C:\ai\automatic\venv 19:49:35-619703 INFO Starting SD.Next 19:49:35-622696 INFO Logger: file="C:\ai\automatic\sdnext.log" level=DEBUG size=65 mode=create 19:49:35-623953 INFO Python version=3.11.9 platform=Windows bin="C:\ai\automatic\venv\Scripts\python.exe" venv="C:\ai\automatic\venv" 19:49:35-828245 INFO Version: app=sd.next updated=2024-09-03 hash=f9dcff6d branch=dev url=https://github.com/vladmandic/automatic/tree/dev ui=dev 19:49:36-666900 INFO Latest published version: bab17a0b4f91b41c885f10262ef8c8e70ba72faa 2024-08-31T20:57:34Z 19:49:36-681662 INFO Platform: arch=AMD64 cpu=Intel64 Family 6 Model 165 Stepping 5, GenuineIntel system=Windows release=Windows-10-10.0.22631-SP0 python=3.11.9 19:49:36-683856 DEBUG Setting environment tuning 19:49:36-684856 INFO HF cache folder: C:\Users\sebas.cache\huggingface\hub 19:49:36-685881 DEBUG Torch allocator: "garbage_collection_threshold:0.80,max_split_size_mb:512" 19:49:36-697378 DEBUG Torch overrides: cuda=False rocm=False ipex=False diml=False openvino=False 19:49:36-698375 DEBUG Torch allowed: cuda=True rocm=True ipex=True diml=True openvino=True 19:49:36-709863 INFO nVidia CUDA toolkit detected: nvidia-smi present 19:49:36-792917 WARNING Modified files: ['models/Reference/playgroundai--playground-v2-1024px-aesthetic.jpg'] 19:49:36-891363 INFO Verifying requirements 19:49:36-894873 INFO Verifying packages 19:49:36-937831 DEBUG Repository update time: Tue Sep 3 17:45:51 2024 19:49:36-938828 INFO Startup: standard 19:49:36-939826 INFO Verifying submodules 19:49:40-052826 DEBUG Git detached head detected: folder="extensions-builtin/sd-extension-chainner" reattach=main 19:49:40-053833 DEBUG Submodule: extensions-builtin/sd-extension-chainner / main 19:49:40-180390 DEBUG Git detached head detected: folder="extensions-builtin/sd-extension-system-info" reattach=main 19:49:40-181388 DEBUG Submodule: 
extensions-builtin/sd-extension-system-info / main 19:49:40-309297 DEBUG Git detached head detected: folder="extensions-builtin/sd-webui-agent-scheduler" reattach=main 19:49:40-310267 DEBUG Submodule: extensions-builtin/sd-webui-agent-scheduler / main 19:49:40-487775 DEBUG Git detached head detected: folder="extensions-builtin/sdnext-modernui" reattach=dev 19:49:40-488773 DEBUG Submodule: extensions-builtin/sdnext-modernui / dev 19:49:40-643496 DEBUG Git detached head detected: folder="extensions-builtin/stable-diffusion-webui-rembg" reattach=master 19:49:40-643496 DEBUG Submodule: extensions-builtin/stable-diffusion-webui-rembg / master 19:49:40-773368 DEBUG Git detached head detected: folder="modules/k-diffusion" reattach=master 19:49:40-774455 DEBUG Submodule: modules/k-diffusion / master 19:49:40-896987 DEBUG Git detached head detected: folder="wiki" reattach=master 19:49:40-898983 DEBUG Submodule: wiki / master 19:49:40-974823 DEBUG Register paths 19:49:41-071990 DEBUG Installed packages: 209 19:49:41-074109 DEBUG Extensions all: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'sdnext-modernui', 'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg'] 19:49:41-259634 DEBUG Running extension installer: C:\ai\automatic\extensions-builtin\sd-extension-system-info\install.py 19:49:41-639342 DEBUG Running extension installer: C:\ai\automatic\extensions-builtin\sd-webui-agent-scheduler\install.py 19:49:42-020350 DEBUG Running extension installer: C:\ai\automatic\extensions-builtin\sd-webui-controlnet\install.py 19:49:42-561136 DEBUG Running extension installer: C:\ai\automatic\extensions-builtin\stable-diffusion-webui-images-browser\install.py 19:49:42-946570 DEBUG Running extension installer: C:\ai\automatic\extensions-builtin\stable-diffusion-webui-rembg\install.py 19:49:43-333996 DEBUG Extensions all: [] 19:49:43-334992 INFO Extensions enabled: ['Lora', 'sd-extension-chainner', 
'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'sdnext-modernui', 'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg'] 19:49:43-336987 INFO Verifying requirements 19:49:43-336987 DEBUG Setup complete without errors: 1725382183 19:49:43-344402 DEBUG Extension preload: {'extensions-builtin': 0.0, 'extensions': 0.0} 19:49:43-345399 DEBUG Starting module: <module 'webui' from 'C:\ai\automatic\webui.py'> 19:49:43-346397 INFO Command line args: ['--medvram', '--debug'] medvram=True debug=True 19:49:43-347394 DEBUG Env flags: [] 19:49:49-563062 INFO Load packages: {'torch': '2.4.0+cu124', 'diffusers': '0.31.0.dev0', 'gradio': '3.43.2'} 19:49:50-753147 DEBUG Read: file="config.json" json=35 bytes=1503 time=0.000 19:49:50-755191 DEBUG Unknown settings: ['cross_attention_options'] 19:49:50-758183 INFO Engine: backend=Backend.DIFFUSERS compute=cuda device=cuda attention="Scaled-Dot-Product" mode=no_grad 19:49:50-823069 INFO Device: device=NVIDIA GeForce RTX 4080 n=1 arch=sm_90 cap=(8, 9) cuda=12.4 cudnn=90100 driver=560.81 19:49:50-826483 DEBUG Read: file="html\reference.json" json=52 bytes=29118 time=0.001 19:49:51-270287 DEBUG ONNX: version=1.19.0 provider=CUDAExecutionProvider, available=['AzureExecutionProvider', 'CPUExecutionProvider'] 19:49:51-476149 DEBUG Importing LDM 19:49:51-495196 DEBUG Entering start sequence 19:49:51-498187 DEBUG Initializing 19:49:51-525671 INFO Available VAEs: path="models\VAE" items=0 19:49:51-527666 DEBUG Available UNets: path="models\UNET" items=0 19:49:51-529660 DEBUG Available T5s: path="models\T5" items=0 19:49:51-530657 INFO Disabled extensions: ['sd-webui-controlnet', 'sdnext-modernui'] 19:49:51-533159 DEBUG Read: file="cache.json" json=2 bytes=10089 time=0.000 19:49:51-541147 DEBUG Read: file="metadata.json" json=470 bytes=1620174 time=0.007 19:49:51-547536 DEBUG Scanning diffusers cache: folder=models\Diffusers items=2 time=0.00 19:49:51-549531 INFO Available models: 
path="models\Stable-diffusion" items=21 time=0.02 19:49:51-774650 DEBUG Load extensions 19:49:51-828606 INFO Extension: script='extensions-builtin\Lora\scripts\lora_script.py' [2;36m19:49:51-825613[0m[2;36m [0m[34mINFO [0m LoRA networks: [33mavailable[0m=[1;36m70[0m [33mfolders[0m=[1;36m3[0m 19:49:52-266436 INFO Extension: script='extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py' Using sqlite file: extensions-builtin\sd-webui-agent-scheduler\task_scheduler.sqlite3 19:49:52-489808 DEBUG Extensions init time: 0.71 sd-webui-agent-scheduler=0.39 stable-diffusion-webui-images-browser=0.21 19:49:52-503249 DEBUG Read: file="html/upscalers.json" json=4 bytes=2672 time=0.000 19:49:52-505257 DEBUG Read: file="extensions-builtin\sd-extension-chainner\models.json" json=24 bytes=2719 time=0.000 19:49:52-507251 DEBUG chaiNNer models: path="models\chaiNNer" defined=24 discovered=0 downloaded=8 19:49:52-509245 DEBUG Upscaler type=ESRGAN folder="models\ESRGAN" model="1x-ITF-SkinDiffDetail-Lite-v1" path="models\ESRGAN\1x-ITF-SkinDiffDetail-Lite-v1.pth" 19:49:52-510242 DEBUG Upscaler type=ESRGAN folder="models\ESRGAN" model="4xNMKDSuperscale_4xNMKDSuperscale" path="models\ESRGAN\4xNMKDSuperscale_4xNMKDSuperscale.pth" 19:49:52-511241 DEBUG Upscaler type=ESRGAN folder="models\ESRGAN" model="4x_NMKD-Siax_200k" path="models\ESRGAN\4x_NMKD-Siax_200k.pth" 19:49:52-515419 DEBUG Load upscalers: total=56 downloaded=11 user=3 time=0.02 ['None', 'Lanczos', 'Nearest', 'ChaiNNer', 'AuraSR', 'ESRGAN', 'LDSR', 'RealESRGAN', 'SCUNet', 'SD', 'SwinIR'] 19:49:52-533908 DEBUG Load styles: folder="models\styles" items=288 time=0.02 19:49:52-537126 DEBUG Creating UI 19:49:52-538124 DEBUG UI themes available: type=Standard themes=12 19:49:52-539121 INFO UI theme: type=Standard name="black-teal" 19:49:52-547608 DEBUG UI theme: css="C:\ai\automatic\javascript\black-teal.css" base="sdnext.css" user="None" 19:49:52-549602 DEBUG UI initialize: txt2img 19:49:52-615207 DEBUG Networks: 
page='model' items=72 subfolders=2 tab=txt2img folders=['models\Stable-diffusion', 'models\Diffusers', 'models\Reference'] list=0.05 thumb=0.01 desc=0.00 info=0.00 workers=4 sort=Default 19:49:52-624200 DEBUG Networks: page='lora' items=70 subfolders=0 tab=txt2img folders=['models\Lora', 'models\LyCORIS'] list=0.05 thumb=0.01 desc=0.02 info=0.03 workers=4 sort=Default 19:49:52-657757 DEBUG Networks: page='style' items=288 subfolders=1 tab=txt2img folders=['models\styles', 'html'] list=0.04 thumb=0.00 desc=0.00 info=0.00 workers=4 sort=Default 19:49:52-663257 DEBUG Networks: page='embedding' items=13 subfolders=0 tab=txt2img folders=['models\embeddings'] list=0.03 thumb=0.01 desc=0.00 info=0.00 workers=4 sort=Default 19:49:52-665270 DEBUG Networks: page='vae' items=0 subfolders=0 tab=txt2img folders=['models\VAE'] list=0.00 thumb=0.00 desc=0.00 info=0.00 workers=4 sort=Default 19:49:52-750098 DEBUG UI initialize: img2img 19:49:53-026508 DEBUG UI initialize: control models=models\control 19:49:53-318592 DEBUG Read: file="ui-config.json" json=0 bytes=2 time=0.000 19:49:53-423146 DEBUG UI themes available: type=Standard themes=12 19:49:54-006627 DEBUG Reading failed: C:\ai\automatic\html\extensions.json [Errno 2] No such file or directory: 'C:\ai\automatic\html\extensions.json' 19:49:54-007624 INFO Extension list is empty: refresh required 19:49:54-669288 DEBUG Extension list: processed=8 installed=8 enabled=6 disabled=2 visible=8 hidden=0 19:49:55-037787 DEBUG Root paths: ['C:\ai\automatic'] 19:49:55-120858 INFO Local URL: http://127.0.0.1:7860/ 19:49:55-121855 DEBUG Gradio functions: registered=2364 19:49:55-123361 DEBUG FastAPI middleware: ['Middleware', 'Middleware'] 19:49:55-126652 DEBUG Creating API 19:49:55-306381 INFO [AgentScheduler] Task queue is empty 19:49:55-307355 INFO [AgentScheduler] Registering APIs 19:49:55-438287 DEBUG Scripts setup: ['IP Adapters:0.025', 'AnimateDiff:0.009', 'X/Y/Z Grid:0.011', 'Face:0.013', 'Image-to-Video:0.006'] 19:49:55-439285 
DEBUG Model metadata: file="metadata.json" no changes 19:49:55-441280 DEBUG Torch mode: deterministic=False 19:49:55-476612 INFO Torch override VAE dtype: no-half set 19:49:55-477609 DEBUG Desired Torch parameters: dtype=FP16 no-half=False no-half-vae=True upscast=False 19:49:55-478607 INFO Setting Torch parameters: device=cuda dtype=torch.float16 vae=torch.float32 unet=torch.float16 context=no_grad fp16=True bf16=None optimization=Scaled-Dot-Product 19:49:55-480602 DEBUG Model requested: fn= 19:49:55-481601 INFO Select: model="epicrealismXL_v8Kiss [04639d4084]" 19:49:55-484118 DEBUG Load model: existing=False target=C:\ai\automatic\models\Stable-diffusion\epicrealismXL_v8Kiss.safetensors info=None 19:49:55-485593 DEBUG Diffusers loading: path="C:\ai\automatic\models\Stable-diffusion\epicrealismXL_v8Kiss.safetensors" 19:49:55-486590 INFO Autodetect: model="Stable Diffusion XL" class=StableDiffusionXLPipeline file="C:\ai\automatic\models\Stable-diffusion\epicrealismXL_v8Kiss.safetensors" size=6617MB Fetching 17 files: 100%|███████████████████████████████████████████████████████████████████████| 17/17 [00:00<?, ?it/s] Loading pipeline components... 
100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7/7 [ 0:00:01 < 0:00:00 , 6 C/s ] 19:49:57-388332 DEBUG Setting model: pipeline=StableDiffusionXLPipeline config={'low_cpu_mem_usage': True, 'torch_dtype': torch.float16, 'load_connected_pipeline': True, 'extract_ema': False, 'use_safetensors': True, 'cache_dir': 'C:\Users\sebas\.cache\huggingface\hub'} 19:50:01-105555 INFO Load embeddings: loaded=2 skipped=11 time=3.72 19:50:01-345807 DEBUG Setting model VAE: no-half=True 19:50:01-346805 DEBUG Setting model: slicing=True 19:50:01-347802 DEBUG Setting model: tiling=True 19:50:01-348799 DEBUG Setting model: attention=Scaled-Dot-Product 19:50:01-369781 DEBUG Setting model: offload=model 19:50:01-408992 DEBUG Read: file="C:\ai\automatic\configs\sdxl\vae\config.json" json=15 bytes=674 time=0.000 19:50:01-646005 DEBUG GC: utilization={'gpu': 8, 'ram': 4, 'threshold': 80} gc={'collected': 305, 'saved': 0.0} before={'gpu': 1.33, 'ram': 2.48} after={'gpu': 1.33, 'ram': 2.48, 'retries': 0, 'oom': 0} device=cuda fn=load_diffuser time=0.23 19:50:01-656505 INFO Load model: time=5.93 load=1.90 embeddings=3.72 options=0.26 native=1024 {'ram': {'used': 2.48, 'total': 63.92}, 'gpu': {'used': 1.33, 'total': 15.99}, 'retries': 0, 'oom': 0} 19:50:01-659497 DEBUG Script callback init time: image_browser.py:ui_tabs=0.43 system-info.py:app_started=0.07 task_scheduler.py:app_started=0.15 19:50:01-661491 DEBUG Save: file="config.json" json=35 bytes=1456 time=0.002 19:50:01-661491 INFO Startup time: 18.31 torch=4.34 gradio=1.42 diffusers=0.46 libraries=1.91 extensions=0.71 face-restore=0.23 ui-en=0.23 ui-txt2img=0.06 ui-img2img=0.24 ui-control=0.12 ui-settings=0.25 ui-extensions=1.14 ui-defaults=0.29 launch=0.14 api=0.10 app-started=0.21 checkpoint=6.22 19:50:01-664988 DEBUG Unused settings: ['cross_attention_options'] 19:51:32-202282 INFO Select: model="Diffusers\Disty0/FLUX.1-dev-qint4 [82811df42b]" 19:51:32-204276 DEBUG Load model: existing=False 
target=models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d 5170c306a6eb info=None 19:51:32-585594 DEBUG GC: utilization={'gpu': 8, 'ram': 3, 'threshold': 80} gc={'collected': 381, 'saved': 0.0} before={'gpu': 1.33, 'ram': 1.83} after={'gpu': 1.33, 'ram': 1.83, 'retries': 0, 'oom': 0} device=cuda fn=unload_modelweights time=0.24 19:51:32-587593 DEBUG Unload weights model: {'ram': {'used': 1.83, 'total': 63.92}, 'gpu': {'used': 1.33, 'total': 15.99}, 'retries': 0, 'oom': 0} 19:51:32-596091 DEBUG Diffusers loading: path="models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d5 170c306a6eb" 19:51:32-598088 INFO Autodetect: model="FLUX" class=FluxPipeline file="models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d5 170c306a6eb" size=0MB 19:51:32-604071 DEBUG Loading FLUX: model="Diffusers\Disty0/FLUX.1-dev-qint4" unet="None" t5="None" vae="None" quant=qint4 offload=model dtype=torch.float16 19:51:33-156265 ERROR Loading FLUX: Failed to load Quanto transformer: Repo id must use alphanumeric chars or '-', '', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'Diffusers\Disty0/FLUX.1-dev-qint4'. 
19:51:33-158259 ERROR FLUX Quanto:: HFValidationError ╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮ │ C:\ai\automatic\modules\model_flux.py:50 in load_flux_quanto │ │ │ │ 49 │ │ │ repo_id = checkpoint_info.name.replace('Diffusers/', '') │ │ ❱ 50 │ │ │ quantization_map = hf_hub_download(repo_id, subfolder='transformer', filename='quantization_map.js │ │ 51 │ │ with open(quantization_map, "r", encoding='utf8') as f: │ │ │ │ C:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils_deprecation.py:101 in inner_f │ │ │ │ 100 │ │ │ │ warnings.warn(message, FutureWarning) │ │ ❱ 101 │ │ │ return f(*args, *kwargs) │ │ 102 │ │ │ │ C:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils_validators.py:106 in _inner_fn │ │ │ │ 105 │ │ │ if arg_name in ["repo_id", "from_id", "to_id"]: │ │ ❱ 106 │ │ │ │ validate_repo_id(arg_value) │ │ 107 │ │ │ │ C:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils_validators.py:160 in validate_repo_id │ │ │ │ 159 │ if not REPO_ID_REGEX.match(repoid): │ │ ❱ 160 │ │ raise HFValidationError( │ │ 161 │ │ │ "Repo id must use alphanumeric chars or '-', '', '.', '--' and '..' are" │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ HFValidationError: Repo id must use alphanumeric chars or '-', '', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'Diffusers\Disty0/FLUX.1-dev-qint4'. 19:51:33-219179 ERROR Loading FLUX: Failed to load Quanto text encoder: Repo id must use alphanumeric chars or '-', '', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'Diffusers\Disty0/FLUX.1-dev-qint4'. 
19:51:33-220176 ERROR FLUX Quanto:: HFValidationError ╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮ │ C:\ai\automatic\modules\model_flux.py:74 in load_flux_quanto │ │ │ │ 73 │ │ │ repo_id = checkpoint_info.name.replace('Diffusers/', '') │ │ ❱ 74 │ │ │ quantization_map = hf_hub_download(repo_id, subfolder='text_encoder_2', filename='quantization_map │ │ 75 │ │ with open(quantization_map, "r", encoding='utf8') as f: │ │ │ │ C:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils_deprecation.py:101 in inner_f │ │ │ │ 100 │ │ │ │ warnings.warn(message, FutureWarning) │ │ ❱ 101 │ │ │ return f(args, **kwargs) │ │ 102 │ │ │ │ C:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils_validators.py:106 in _inner_fn │ │ │ │ 105 │ │ │ if arg_name in ["repo_id", "from_id", "to_id"]: │ │ ❱ 106 │ │ │ │ validate_repo_id(arg_value) │ │ 107 │ │ │ │ C:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils_validators.py:160 in validate_repo_id │ │ │ │ 159 │ if not REPO_ID_REGEX.match(repoid): │ │ ❱ 160 │ │ raise HFValidationError( │ │ 161 │ │ │ "Repo id must use alphanumeric chars or '-', '', '.', '--' and '..' are" │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'Diffusers\Disty0/FLUX.1-dev-qint4'. 19:51:33-244349 DEBUG Loading FLUX: preloaded=[] Loading pipeline components... 43% ━━━━━━━━━━━━━━━━━━━━━╺━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3/7 [ 0:00:25 < 0:00:01 , 5 C/s ]19:51:59-766344 DEBUG Server: alive=True jobs=1 requests=7 uptime=130 memory=16.78/63.92 backend=Backend.DIFFUSERS state=idle

Loading checkpoint shards: 100%|#########################################################| 2/2 [00:18<00:00,  9.14s/it]

Loading pipeline components... 100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7/7 [ 0:01:05 < 0:00:00 , 0 C/s ] 19:52:39-336579 INFO Load embeddings: loaded=0 skipped=13 time=0.03 19:52:39-426564 DEBUG Setting model VAE: no-half=True 19:52:39-428818 DEBUG Setting model: slicing=True 19:52:39-429815 DEBUG Setting model: tiling=True 19:52:39-430812 DEBUG Setting model: attention=Scaled-Dot-Product 19:52:39-448413 DEBUG Setting model: offload=model 19:52:39-742676 DEBUG GC: utilization={'gpu': 8, 'ram': 54, 'threshold': 80} gc={'collected': 304, 'saved': 0.0} before={'gpu': 1.33, 'ram': 34.68} after={'gpu': 1.33, 'ram': 34.68, 'retries': 0, 'oom': 0} device=cuda fn=load_diffuser time=0.26 19:52:39-744670 INFO Load model: time=66.88 load=66.71 options=0.11 native=1024 {'ram': {'used': 34.68, 'total': 63.92}, 'gpu': {'used': 1.33, 'total': 15.99}, 'retries': 0, 'oom': 0} 19:52:39-746665 DEBUG Setting changed: sd_model_checkpoint=Diffusers\Disty0/FLUX.1-dev-qint4 [82811df42b] progress=True 19:52:39-749023 DEBUG Save: file="config.json" json=35 bytes=1408 time=0.002 19:52:39-751017 DEBUG Unused settings: ['cross_attention_options'] `

vladmandic commented 2 months ago

ah, powershell, not cmd....great... try

[System.Environment]::SetEnvironmentVariable('SD_LOAD_DEBUG','true')
.\webui.bat --medvram --debug

or just use set from cmd.exe, as I mentioned above.
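The shell distinction matters here: in cmd.exe, set defines a process environment variable that child processes inherit, while in PowerShell set is an alias for Set-Variable and does not touch the environment; the $env: drive does.

```shell
# cmd.exe: "set" defines a process environment variable for the session
set SD_LOAD_DEBUG=true
.\webui.bat --medvram --debug

# PowerShell: "set" aliases Set-Variable and does NOT set environment
# variables; assign through the $env: drive instead
$env:SD_LOAD_DEBUG = 'true'
.\webui.bat --medvram --debug
```
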

SAC020 commented 2 months ago

This is with cmd. It seems to set the variable, but I don't see any additional information myself. I hope I'm wrong and you can see something relevant.

06:13:59-974905 DEBUG Env flags: ['SD_LOAD_DEBUG=true']

I'm not sure why it says "size=0MB" though; that seems odd.

06:14:22-160028 INFO Autodetect: model="FLUX" class=FluxPipeline file="models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d5 170c306a6eb" size=0MB

c:\ai\automatic>set SD_LOAD_DEBUG=true

c:\ai\automatic>.\webui.bat --medvram --debug
Using VENV: c:\ai\automatic\venv
06:13:51-592018 INFO     Starting SD.Next
06:13:51-595756 INFO     Logger: file="c:\ai\automatic\sdnext.log" level=DEBUG size=65 mode=create
06:13:51-596581 INFO     Python version=3.11.9 platform=Windows bin="c:\ai\automatic\venv\Scripts\python.exe"
                         venv="c:\ai\automatic\venv"
06:13:51-837543 INFO     Version: app=sd.next updated=2024-09-03 hash=f9dcff6d branch=dev
                         url=https://github.com/vladmandic/automatic/tree/dev ui=dev
06:13:52-811714 INFO     Latest published version: bab17a0b4f91b41c885f10262ef8c8e70ba72faa 2024-08-31T20:57:34Z
06:13:52-828162 INFO     Platform: arch=AMD64 cpu=Intel64 Family 6 Model 165 Stepping 5, GenuineIntel system=Windows
                         release=Windows-10-10.0.22631-SP0 python=3.11.9
06:13:52-830158 DEBUG    Setting environment tuning
06:13:52-831154 INFO     HF cache folder: C:\Users\sebas\.cache\huggingface\hub
06:13:52-832152 DEBUG    Torch allocator: "garbage_collection_threshold:0.80,max_split_size_mb:512"
06:13:52-845303 DEBUG    Torch overrides: cuda=False rocm=False ipex=False diml=False openvino=False
06:13:52-846300 DEBUG    Torch allowed: cuda=True rocm=True ipex=True diml=True openvino=True
06:13:52-856833 INFO     nVidia CUDA toolkit detected: nvidia-smi present
06:13:53-003181 WARNING  Modified files: ['models/Reference/playgroundai--playground-v2-1024px-aesthetic.jpg']
06:13:53-102568 INFO     Verifying requirements
06:13:53-106574 INFO     Verifying packages
06:13:53-150811 DEBUG    Repository update time: Tue Sep  3 17:45:51 2024
06:13:53-151808 INFO     Startup: standard
06:13:53-152806 INFO     Verifying submodules
06:13:56-630020 DEBUG    Git detached head detected: folder="extensions-builtin/sd-extension-chainner" reattach=main
06:13:56-632015 DEBUG    Submodule: extensions-builtin/sd-extension-chainner / main
06:13:56-756354 DEBUG    Git detached head detected: folder="extensions-builtin/sd-extension-system-info" reattach=main
06:13:56-757355 DEBUG    Submodule: extensions-builtin/sd-extension-system-info / main
06:13:56-884085 DEBUG    Git detached head detected: folder="extensions-builtin/sd-webui-agent-scheduler" reattach=main
06:13:56-886071 DEBUG    Submodule: extensions-builtin/sd-webui-agent-scheduler / main
06:13:57-059945 DEBUG    Git detached head detected: folder="extensions-builtin/sdnext-modernui" reattach=dev
06:13:57-060945 DEBUG    Submodule: extensions-builtin/sdnext-modernui / dev
06:13:57-212641 DEBUG    Git detached head detected: folder="extensions-builtin/stable-diffusion-webui-rembg"
                         reattach=master
06:13:57-214606 DEBUG    Submodule: extensions-builtin/stable-diffusion-webui-rembg / master
06:13:57-339171 DEBUG    Git detached head detected: folder="modules/k-diffusion" reattach=master
06:13:57-340169 DEBUG    Submodule: modules/k-diffusion / master
06:13:57-468990 DEBUG    Git detached head detected: folder="wiki" reattach=master
06:13:57-470985 DEBUG    Submodule: wiki / master
06:13:57-614722 DEBUG    Register paths
06:13:57-710002 DEBUG    Installed packages: 209
06:13:57-711972 DEBUG    Extensions all: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'sdnext-modernui',
                         'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg']
06:13:57-900076 DEBUG    Running extension installer:
                         C:\ai\automatic\extensions-builtin\sd-extension-system-info\install.py
06:13:58-287383 DEBUG    Running extension installer:
                         C:\ai\automatic\extensions-builtin\sd-webui-agent-scheduler\install.py
06:13:58-667676 DEBUG    Running extension installer: C:\ai\automatic\extensions-builtin\sd-webui-controlnet\install.py
06:13:59-201435 DEBUG    Running extension installer:
                         C:\ai\automatic\extensions-builtin\stable-diffusion-webui-images-browser\install.py
06:13:59-580423 DEBUG    Running extension installer:
                         C:\ai\automatic\extensions-builtin\stable-diffusion-webui-rembg\install.py
06:13:59-957508 DEBUG    Extensions all: []
06:13:59-957948 INFO     Extensions enabled: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'sdnext-modernui',
                         'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg']
06:13:59-959945 INFO     Verifying requirements
06:13:59-960943 DEBUG    Setup complete without errors: 1725419640
06:13:59-971914 DEBUG    Extension preload: {'extensions-builtin': 0.01, 'extensions': 0.0}
06:13:59-972911 DEBUG    Starting module: <module 'webui' from 'c:\\ai\\automatic\\webui.py'>
06:13:59-973908 INFO     Command line args: ['--medvram', '--debug'] medvram=True debug=True
06:13:59-974905 DEBUG    Env flags: ['SD_LOAD_DEBUG=true']
06:14:12-825511 INFO     Load packages: {'torch': '2.4.0+cu124', 'diffusers': '0.31.0.dev0', 'gradio': '3.43.2'}
06:14:14-614189 DEBUG    Read: file="config.json" json=35 bytes=1455 time=0.000
06:14:14-616343 DEBUG    Unknown settings: ['cross_attention_options']
06:14:14-618730 INFO     Engine: backend=Backend.DIFFUSERS compute=cuda device=cuda attention="Scaled-Dot-Product"
                         mode=no_grad
06:14:14-688554 INFO     Device: device=NVIDIA GeForce RTX 4080 n=1 arch=sm_90 cap=(8, 9) cuda=12.4 cudnn=90100
                         driver=560.81
06:14:14-692515 DEBUG    Read: file="html\reference.json" json=52 bytes=29118 time=0.002
06:14:15-770849 DEBUG    ONNX: version=1.19.0 provider=CUDAExecutionProvider, available=['AzureExecutionProvider',
                         'CPUExecutionProvider']
06:14:16-198852 DEBUG    Importing LDM
06:14:16-236482 DEBUG    Entering start sequence
06:14:16-239722 DEBUG    Initializing
06:14:16-309066 INFO     Available VAEs: path="models\VAE" items=0
06:14:16-310973 DEBUG    Available UNets: path="models\UNET" items=0
06:14:16-311970 DEBUG    Available T5s: path="models\T5" items=0
06:14:16-312968 INFO     Disabled extensions: ['sd-webui-controlnet', 'sdnext-modernui']
06:14:16-322196 DEBUG    Read: file="cache.json" json=2 bytes=10089 time=0.001
06:14:16-331171 DEBUG    Read: file="metadata.json" json=470 bytes=1620174 time=0.007
06:14:16-435287 DEBUG    Scanning diffusers cache: folder=models\Diffusers items=2 time=0.00
06:14:16-437281 INFO     Available models: path="models\Stable-diffusion" items=21 time=0.12
06:14:17-249884 DEBUG    Load extensions
06:14:17-833306 INFO     Extension: script='extensions-builtin\Lora\scripts\lora_script.py'
                         [2;36m06:14:17-826945[0m[2;36m [0m[34mINFO    [0m LoRA networks: [33mavailable[0m=[1;36m70[0m
                         [33mfolders[0m=[1;36m2[0m
06:14:18-921005 INFO     Extension: script='extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py' Using
                         sqlite file: extensions-builtin\sd-webui-agent-scheduler\task_scheduler.sqlite3
06:14:19-222551 DEBUG    Extensions init time: 1.97 Lora=0.50 sd-extension-chainner=0.09 sd-webui-agent-scheduler=0.98
                         stable-diffusion-webui-images-browser=0.28
06:14:19-258552 DEBUG    Read: file="html/upscalers.json" json=4 bytes=2672 time=0.000
06:14:19-262356 DEBUG    Read: file="extensions-builtin\sd-extension-chainner\models.json" json=24 bytes=2719 time=0.001
06:14:19-266249 DEBUG    chaiNNer models: path="models\chaiNNer" defined=24 discovered=0 downloaded=8
06:14:19-268245 DEBUG    Upscaler type=ESRGAN folder="models\ESRGAN" model="1x-ITF-SkinDiffDetail-Lite-v1"
                         path="models\ESRGAN\1x-ITF-SkinDiffDetail-Lite-v1.pth"
06:14:19-269242 DEBUG    Upscaler type=ESRGAN folder="models\ESRGAN" model="4xNMKDSuperscale_4xNMKDSuperscale"
                         path="models\ESRGAN\4xNMKDSuperscale_4xNMKDSuperscale.pth"
06:14:19-270241 DEBUG    Upscaler type=ESRGAN folder="models\ESRGAN" model="4x_NMKD-Siax_200k"
                         path="models\ESRGAN\4x_NMKD-Siax_200k.pth"
06:14:19-273231 DEBUG    Load upscalers: total=56 downloaded=11 user=3 time=0.05 ['None', 'Lanczos', 'Nearest',
                         'ChaiNNer', 'AuraSR', 'ESRGAN', 'LDSR', 'RealESRGAN', 'SCUNet', 'SD', 'SwinIR']
06:14:19-294380 DEBUG    Load styles: folder="models\styles" items=288 time=0.02
06:14:19-300981 DEBUG    Creating UI
06:14:19-302567 DEBUG    UI themes available: type=Standard themes=12
06:14:19-303564 INFO     UI theme: type=Standard name="black-teal"
06:14:19-311542 DEBUG    UI theme: css="C:\ai\automatic\javascript\black-teal.css" base="sdnext.css" user="None"
06:14:19-314935 DEBUG    UI initialize: txt2img
06:14:19-377925 DEBUG    Networks: page='model' items=72 subfolders=2 tab=txt2img folders=['models\\Stable-diffusion',
                         'models\\Diffusers', 'models\\Reference'] list=0.04 thumb=0.01 desc=0.01 info=0.00 workers=4
                         sort=Default
06:14:19-386901 DEBUG    Networks: page='lora' items=70 subfolders=0 tab=txt2img folders=['models\\Lora',
                         'models\\LyCORIS'] list=0.04 thumb=0.01 desc=0.02 info=0.03 workers=4 sort=Default
06:14:19-419813 DEBUG    Networks: page='style' items=288 subfolders=1 tab=txt2img folders=['models\\styles', 'html']
                         list=0.04 thumb=0.00 desc=0.00 info=0.00 workers=4 sort=Default
06:14:19-424799 DEBUG    Networks: page='embedding' items=13 subfolders=0 tab=txt2img folders=['models\\embeddings']
                         list=0.02 thumb=0.01 desc=0.00 info=0.00 workers=4 sort=Default
06:14:19-426793 DEBUG    Networks: page='vae' items=0 subfolders=0 tab=txt2img folders=['models\\VAE'] list=0.00
                         thumb=0.00 desc=0.00 info=0.00 workers=4 sort=Default
06:14:19-505574 DEBUG    UI initialize: img2img
06:14:19-605194 DEBUG    UI initialize: control models=models\control
06:14:19-880172 DEBUG    Read: file="ui-config.json" json=0 bytes=2 time=0.001
06:14:19-979737 DEBUG    UI themes available: type=Standard themes=12
06:14:20-696738 DEBUG    Reading failed: C:\ai\automatic\html\extensions.json [Errno 2] No such file or directory:
                         'C:\\ai\\automatic\\html\\extensions.json'
06:14:20-698627 INFO     Extension list is empty: refresh required
06:14:21-289137 DEBUG    Extension list: processed=8 installed=8 enabled=6 disabled=2 visible=8 hidden=0
06:14:21-455240 DEBUG    Root paths: ['c:\\ai\\automatic']
06:14:21-560365 INFO     Local URL: http://127.0.0.1:7860/
06:14:21-561362 DEBUG    Gradio functions: registered=2364
06:14:21-564422 DEBUG    FastAPI middleware: ['Middleware', 'Middleware']
06:14:21-567416 DEBUG    Creating API
06:14:21-735966 INFO     [AgentScheduler] Task queue is empty
06:14:21-737960 INFO     [AgentScheduler] Registering APIs
06:14:22-071070 DEBUG    Scripts setup: ['IP Adapters:0.022', 'AnimateDiff:0.009', 'X/Y/Z Grid:0.01', 'Face:0.013',
                         'Image-to-Video:0.006']
06:14:22-114124 DEBUG    Save: file="metadata.json" json=559 bytes=1813560 time=0.041
06:14:22-115490 INFO     Model metadata saved: file="metadata.json" items=89 time=1.88
06:14:22-116490 DEBUG    Torch mode: deterministic=False
06:14:22-150054 INFO     Torch override VAE dtype: no-half set
06:14:22-151052 DEBUG    Desired Torch parameters: dtype=FP16 no-half=False no-half-vae=True upscast=False
06:14:22-152049 INFO     Setting Torch parameters: device=cuda dtype=torch.float16 vae=torch.float32 unet=torch.float16
                         context=no_grad fp16=True bf16=None optimization=Scaled-Dot-Product
06:14:22-154044 DEBUG    Model requested: fn=<lambda>
06:14:22-155041 INFO     Select: model="Diffusers\Disty0/FLUX.1-dev-qint4 [82811df42b]"
06:14:22-157036 DEBUG    Load model: existing=False
                         target=models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d
                         5170c306a6eb info=None
06:14:22-159030 DEBUG    Diffusers loading:
                         path="models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d5
                         170c306a6eb"
06:14:22-160028 INFO     Autodetect: model="FLUX" class=FluxPipeline
                         file="models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d5
                         170c306a6eb" size=0MB
06:14:22-165194 DEBUG    Loading FLUX: model="Diffusers\Disty0/FLUX.1-dev-qint4" unet="None" t5="None" vae="None"
                         quant=qint4 offload=model dtype=torch.float16
06:14:22-687065 ERROR    Loading FLUX: Failed to load Quanto transformer: Repo id must use alphanumeric chars or '-',
                         '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is
                         96: 'Diffusers\Disty0/FLUX.1-dev-qint4'.
06:14:22-688910 ERROR    FLUX Quanto:: HFValidationError
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ C:\ai\automatic\modules\model_flux.py:50 in load_flux_quanto                                                         │
│                                                                                                                      │
│    49 │   │   │   repo_id = checkpoint_info.name.replace('Diffusers/', '')                                           │
│ ❱  50 │   │   │   quantization_map = hf_hub_download(repo_id, subfolder='transformer', filename='quantization_map.js │
│    51 │   │   with open(quantization_map, "r", encoding='utf8') as f:                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_deprecation.py:101 in inner_f                          │
│                                                                                                                      │
│   100 │   │   │   │   warnings.warn(message, FutureWarning)                                                          │
│ ❱ 101 │   │   │   return f(*args, **kwargs)                                                                          │
│   102                                                                                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_validators.py:106 in _inner_fn                         │
│                                                                                                                      │
│   105 │   │   │   if arg_name in ["repo_id", "from_id", "to_id"]:                                                    │
│ ❱ 106 │   │   │   │   validate_repo_id(arg_value)                                                                    │
│   107                                                                                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_validators.py:160 in validate_repo_id                  │
│                                                                                                                      │
│   159 │   if not REPO_ID_REGEX.match(repo_id):                                                                       │
│ ❱ 160 │   │   raise HFValidationError(                                                                               │
│   161 │   │   │   "Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are"                          │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'Diffusers\Disty0/FLUX.1-dev-qint4'.
06:14:22-743187 ERROR    Loading FLUX: Failed to load Quanto text encoder: Repo id must use alphanumeric chars or '-',
                         '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is
                         96: 'Diffusers\Disty0/FLUX.1-dev-qint4'.
06:14:22-744666 ERROR    FLUX Quanto:: HFValidationError
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ C:\ai\automatic\modules\model_flux.py:74 in load_flux_quanto                                                         │
│                                                                                                                      │
│    73 │   │   │   repo_id = checkpoint_info.name.replace('Diffusers/', '')                                           │
│ ❱  74 │   │   │   quantization_map = hf_hub_download(repo_id, subfolder='text_encoder_2', filename='quantization_map │
│    75 │   │   with open(quantization_map, "r", encoding='utf8') as f:                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_deprecation.py:101 in inner_f                          │
│                                                                                                                      │
│   100 │   │   │   │   warnings.warn(message, FutureWarning)                                                          │
│ ❱ 101 │   │   │   return f(*args, **kwargs)                                                                          │
│   102                                                                                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_validators.py:106 in _inner_fn                         │
│                                                                                                                      │
│   105 │   │   │   if arg_name in ["repo_id", "from_id", "to_id"]:                                                    │
│ ❱ 106 │   │   │   │   validate_repo_id(arg_value)                                                                    │
│   107                                                                                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_validators.py:160 in validate_repo_id                  │
│                                                                                                                      │
│   159 │   if not REPO_ID_REGEX.match(repo_id):                                                                       │
│ ❱ 160 │   │   raise HFValidationError(                                                                               │
│   161 │   │   │   "Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are"                          │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'Diffusers\Disty0/FLUX.1-dev-qint4'.
06:14:22-767879 DEBUG    Loading FLUX: preloaded=[]

Loading checkpoint shards: 100%|#########################################################| 2/2 [00:17<00:00,  8.97s/it]

Loading pipeline components... 100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7/7  [ 0:01:02 < 0:00:00 , 0 C/s ]
06:15:26-090091 INFO     Load embeddings: loaded=0 skipped=13 time=0.05
06:15:26-171873 DEBUG    Setting model VAE: no-half=True
06:15:26-172870 DEBUG    Setting model: slicing=True
06:15:26-173867 DEBUG    Setting model: tiling=True
06:15:26-174865 DEBUG    Setting model: attention=Scaled-Dot-Product
06:15:26-195809 DEBUG    Setting model: offload=model
06:15:26-444412 DEBUG    GC: utilization={'gpu': 8, 'ram': 54, 'threshold': 80} gc={'collected': 305, 'saved': 0.0}
                         before={'gpu': 1.33, 'ram': 34.56} after={'gpu': 1.33, 'ram': 34.56, 'retries': 0, 'oom': 0}
                         device=cuda fn=load_diffuser time=0.22
06:15:26-446409 INFO     Load model: time=64.07 load=63.88 embeddings=0.05 options=0.10 native=1024 {'ram': {'used':
                         34.56, 'total': 63.92}, 'gpu': {'used': 1.33, 'total': 15.99}, 'retries': 0, 'oom': 0}
06:15:26-449333 DEBUG    Script callback init time: image_browser.py:ui_tabs=0.57 system-info.py:app_started=0.06
                         task_scheduler.py:app_started=0.35
06:15:26-450299 INFO     Startup time: 86.47 torch=8.43 gradio=2.46 diffusers=1.95 libraries=3.37 samplers=0.07
                         extensions=1.97 models=0.12 face-restore=0.81 upscalers=0.05 ui-en=0.22 ui-txt2img=0.06
                         ui-img2img=0.06 ui-control=0.11 ui-settings=0.24 ui-extensions=1.20 ui-defaults=0.09
                         launch=0.16 api=0.09 app-started=0.41 checkpoint=64.38
06:15:26-452294 DEBUG    Save: file="config.json" json=35 bytes=1408 time=0.003
06:15:26-455284 DEBUG    Unused settings: ['cross_attention_options']
06:16:00-466853 DEBUG    Server: alive=True jobs=1 requests=1 uptime=107 memory=34.56/63.92 backend=Backend.DIFFUSERS
                         state=idle
vladmandic commented 2 months ago

Hopefully this is fixed now in dev - if it's not, update here (with new logs) and I'll reopen the issue.

SAC020 commented 2 months ago

Hopefully this is fixed now in dev - if it's not, update here (with new logs) and I'll reopen the issue.

This doesn't seem solved to me; the error persists. The only change I see is that it no longer falls back automatically to flux-dev - it just fails.

Want me to try re-downloading the model?
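For context on the error above: the name SD.Next passes to `hf_hub_download` mixes a Windows backslash (`Diffusers\Disty0/FLUX.1-dev-qint4`), so huggingface_hub's repo-id validator rejects it before any download is attempted. A minimal sketch of the problem and of the normalization that would be needed - the regex is only an approximation of huggingface_hub's actual `REPO_ID_REGEX`, and `normalize_repo_id` is a hypothetical helper, not SD.Next code:

```python
import re

# Rough approximation of huggingface_hub's repo-id rule (an assumption, not
# the exact REPO_ID_REGEX): alphanumerics plus '-', '_', '.', at most one
# '/' namespace separator, and no backslashes.
REPO_ID_RE = re.compile(r"^[A-Za-z0-9][A-Za-z0-9._-]*(/[A-Za-z0-9][A-Za-z0-9._-]*)?$")

def normalize_repo_id(name: str) -> str:
    """Turn a local checkpoint name like 'Diffusers\\Disty0/FLUX.1-dev-qint4'
    or a cache folder name like 'models--Disty0--FLUX.1-dev-qint4' into a
    valid Hugging Face repo id ('Disty0/FLUX.1-dev-qint4')."""
    name = name.replace("\\", "/")          # Windows path separator
    for prefix in ("Diffusers/", "models--"):
        if name.startswith(prefix):
            name = name[len(prefix):]
    # Cache folder names encode 'org/name' as 'org--name'.
    return name.replace("--", "/")

bad = "Diffusers\\Disty0/FLUX.1-dev-qint4"
print(REPO_ID_RE.match(bad) is None)    # True: the backslash fails validation
print(normalize_repo_id(bad))           # Disty0/FLUX.1-dev-qint4
```

The second traceback below shows the dev-branch code already stripping `models--` and `--`, but the leading `Diffusers\` prefix uses a backslash, so a `replace('Diffusers/', '')` alone would not match it.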


c:\ai\automatic>.\webui.bat --medvram --debug
Using VENV: c:\ai\automatic\venv
17:02:19-522979 INFO     Starting SD.Next
17:02:19-526505 INFO     Logger: file="c:\ai\automatic\sdnext.log" level=DEBUG size=65 mode=create
17:02:19-527503 INFO     Python version=3.11.9 platform=Windows bin="c:\ai\automatic\venv\Scripts\python.exe"
                         venv="c:\ai\automatic\venv"
17:02:19-730245 INFO     Version: app=sd.next updated=2024-09-04 hash=a1b67020 branch=dev
                         url=https://github.com/vladmandic/automatic/tree/dev ui=dev
17:02:20-583934 INFO     Latest published version: bab17a0b4f91b41c885f10262ef8c8e70ba72faa 2024-08-31T20:57:34Z
17:02:20-599085 INFO     Platform: arch=AMD64 cpu=Intel64 Family 6 Model 165 Stepping 5, GenuineIntel system=Windows
                         release=Windows-10-10.0.22631-SP0 python=3.11.9
17:02:20-601079 DEBUG    Setting environment tuning
17:02:20-602076 INFO     HF cache folder: C:\Users\sebas\.cache\huggingface\hub
17:02:20-603073 DEBUG    Torch allocator: "garbage_collection_threshold:0.80,max_split_size_mb:512"
17:02:20-613047 DEBUG    Torch overrides: cuda=False rocm=False ipex=False diml=False openvino=False
17:02:20-615041 DEBUG    Torch allowed: cuda=True rocm=True ipex=True diml=True openvino=True
17:02:20-625043 INFO     nVidia CUDA toolkit detected: nvidia-smi present
17:02:20-711753 WARNING  Modified files: ['models/Reference/playgroundai--playground-v2-1024px-aesthetic.jpg']
17:02:20-805499 INFO     Verifying requirements
17:02:20-809366 INFO     Verifying packages
17:02:20-853313 DEBUG    Repository update time: Wed Sep  4 16:32:08 2024
17:02:20-854310 INFO     Startup: standard
17:02:20-855308 INFO     Verifying submodules
17:02:24-073925 DEBUG    Git detached head detected: folder="extensions-builtin/sd-extension-chainner" reattach=main
17:02:24-075892 DEBUG    Submodule: extensions-builtin/sd-extension-chainner / main
17:02:24-201080 DEBUG    Git detached head detected: folder="extensions-builtin/sd-extension-system-info" reattach=main
17:02:24-202078 DEBUG    Submodule: extensions-builtin/sd-extension-system-info / main
17:02:24-327422 DEBUG    Git detached head detected: folder="extensions-builtin/sd-webui-agent-scheduler" reattach=main
17:02:24-328419 DEBUG    Submodule: extensions-builtin/sd-webui-agent-scheduler / main
17:02:24-502986 DEBUG    Git detached head detected: folder="extensions-builtin/sdnext-modernui" reattach=dev
17:02:24-503983 DEBUG    Submodule: extensions-builtin/sdnext-modernui / dev
17:02:24-649832 DEBUG    Git detached head detected: folder="extensions-builtin/stable-diffusion-webui-rembg"
                         reattach=master
17:02:24-650856 DEBUG    Submodule: extensions-builtin/stable-diffusion-webui-rembg / master
17:02:24-773333 DEBUG    Git detached head detected: folder="modules/k-diffusion" reattach=master
17:02:24-774427 DEBUG    Submodule: modules/k-diffusion / master
17:02:24-897645 DEBUG    Git detached head detected: folder="wiki" reattach=master
17:02:24-899640 DEBUG    Submodule: wiki / master
17:02:24-972613 DEBUG    Register paths
17:02:25-067343 DEBUG    Installed packages: 209
17:02:25-068608 DEBUG    Extensions all: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'sdnext-modernui',
                         'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg']
17:02:25-260125 DEBUG    Running extension installer:
                         C:\ai\automatic\extensions-builtin\sd-extension-system-info\install.py
17:02:25-636886 DEBUG    Running extension installer:
                         C:\ai\automatic\extensions-builtin\sd-webui-agent-scheduler\install.py
17:02:26-014094 DEBUG    Running extension installer: C:\ai\automatic\extensions-builtin\sd-webui-controlnet\install.py
17:02:26-552536 DEBUG    Running extension installer:
                         C:\ai\automatic\extensions-builtin\stable-diffusion-webui-images-browser\install.py
17:02:26-932773 DEBUG    Running extension installer:
                         C:\ai\automatic\extensions-builtin\stable-diffusion-webui-rembg\install.py
17:02:27-303495 DEBUG    Extensions all: []
17:02:27-304492 INFO     Extensions enabled: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'sdnext-modernui',
                         'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg']
17:02:27-305517 INFO     Verifying requirements
17:02:27-306517 DEBUG    Setup complete without errors: 1725458547
17:02:27-314465 DEBUG    Extension preload: {'extensions-builtin': 0.0, 'extensions': 0.0}
17:02:27-315462 DEBUG    Starting module: <module 'webui' from 'c:\\ai\\automatic\\webui.py'>
17:02:27-316460 INFO     Command line args: ['--medvram', '--debug'] medvram=True debug=True
17:02:27-317458 DEBUG    Env flags: ['SD_LOAD_DEBUG=true']
17:02:33-189459 INFO     Load packages: {'torch': '2.4.0+cu124', 'diffusers': '0.31.0.dev0', 'gradio': '3.43.2'}
17:02:34-227175 DEBUG    Read: file="config.json" json=35 bytes=1517 time=0.000
17:02:34-229169 DEBUG    Unknown settings: ['cross_attention_options']
17:02:34-231041 INFO     Engine: backend=Backend.DIFFUSERS compute=cuda device=cuda attention="Scaled-Dot-Product"
                         mode=no_grad
17:02:34-294317 INFO     Device: device=NVIDIA GeForce RTX 4080 n=1 arch=sm_90 cap=(8, 9) cuda=12.4 cudnn=90100
                         driver=560.81
17:02:34-296337 DEBUG    Read: file="html\reference.json" json=52 bytes=29118 time=0.000
17:02:34-675257 DEBUG    ONNX: version=1.19.0 provider=CUDAExecutionProvider, available=['AzureExecutionProvider',
                         'CPUExecutionProvider']
17:02:34-851643 DEBUG    Importing LDM
17:02:34-869313 DEBUG    Entering start sequence
17:02:34-872306 DEBUG    Initializing
17:02:34-897238 INFO     Available VAEs: path="models\VAE" items=0
17:02:34-899233 DEBUG    Available UNets: path="models\UNET" items=0
17:02:34-900612 DEBUG    Available T5s: path="models\T5" items=0
17:02:34-902402 INFO     Disabled extensions: ['sd-webui-controlnet', 'sdnext-modernui']
17:02:34-904118 DEBUG    Read: file="cache.json" json=2 bytes=10089 time=0.000
17:02:34-911099 DEBUG    Read: file="metadata.json" json=559 bytes=1861768 time=0.005
17:02:34-917083 DEBUG    Scanning diffusers cache: folder=models\Diffusers items=2 time=0.00
17:02:34-918215 INFO     Available models: path="models\Stable-diffusion" items=21 time=0.01
17:02:35-114252 DEBUG    Load extensions
17:02:35-159735 INFO     Extension: script='extensions-builtin\Lora\scripts\lora_script.py'
                         17:02:35-156529 INFO     LoRA networks: available=70 folders=3
17:02:35-533324 INFO     Extension: script='extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py' Using
                         sqlite file: extensions-builtin\sd-webui-agent-scheduler\task_scheduler.sqlite3
17:02:35-729270 DEBUG    Extensions init time: 0.61 sd-webui-agent-scheduler=0.33
                         stable-diffusion-webui-images-browser=0.18
17:02:35-742236 DEBUG    Read: file="html/upscalers.json" json=4 bytes=2672 time=0.000
17:02:35-743234 DEBUG    Read: file="extensions-builtin\sd-extension-chainner\models.json" json=24 bytes=2719 time=0.000
17:02:35-745051 DEBUG    chaiNNer models: path="models\chaiNNer" defined=24 discovered=0 downloaded=8
17:02:35-747045 DEBUG    Upscaler type=ESRGAN folder="models\ESRGAN" model="1x-ITF-SkinDiffDetail-Lite-v1"
                         path="models\ESRGAN\1x-ITF-SkinDiffDetail-Lite-v1.pth"
17:02:35-748042 DEBUG    Upscaler type=ESRGAN folder="models\ESRGAN" model="4xNMKDSuperscale_4xNMKDSuperscale"
                         path="models\ESRGAN\4xNMKDSuperscale_4xNMKDSuperscale.pth"
17:02:35-750037 DEBUG    Upscaler type=ESRGAN folder="models\ESRGAN" model="4x_NMKD-Siax_200k"
                         path="models\ESRGAN\4x_NMKD-Siax_200k.pth"
17:02:35-753029 DEBUG    Load upscalers: total=56 downloaded=11 user=3 time=0.02 ['None', 'Lanczos', 'Nearest',
                         'ChaiNNer', 'AuraSR', 'ESRGAN', 'LDSR', 'RealESRGAN', 'SCUNet', 'SD', 'SwinIR']
17:02:35-769823 DEBUG    Load styles: folder="models\styles" items=288 time=0.02
17:02:35-772171 DEBUG    Creating UI
17:02:35-774007 DEBUG    UI themes available: type=Standard themes=12
17:02:35-774868 INFO     UI theme: type=Standard name="black-teal"
17:02:35-782847 DEBUG    UI theme: css="C:\ai\automatic\javascript\black-teal.css" base="sdnext.css" user="None"
17:02:35-785619 DEBUG    UI initialize: txt2img
17:02:35-843491 DEBUG    Networks: page='model' items=72 subfolders=2 tab=txt2img folders=['models\\Stable-diffusion',
                         'models\\Diffusers', 'models\\Reference'] list=0.04 thumb=0.01 desc=0.01 info=0.00 workers=4
                         sort=Default
17:02:35-852658 DEBUG    Networks: page='lora' items=70 subfolders=0 tab=txt2img folders=['models\\Lora',
                         'models\\LyCORIS'] list=0.04 thumb=0.01 desc=0.02 info=0.02 workers=4 sort=Default
17:02:35-884463 DEBUG    Networks: page='style' items=288 subfolders=1 tab=txt2img folders=['models\\styles', 'html']
                         list=0.04 thumb=0.00 desc=0.00 info=0.00 workers=4 sort=Default
17:02:35-889531 DEBUG    Networks: page='embedding' items=13 subfolders=0 tab=txt2img folders=['models\\embeddings']
                         list=0.03 thumb=0.01 desc=0.00 info=0.00 workers=4 sort=Default
17:02:35-891497 DEBUG    Networks: page='vae' items=0 subfolders=0 tab=txt2img folders=['models\\VAE'] list=0.00
                         thumb=0.00 desc=0.00 info=0.00 workers=4 sort=Default
17:02:35-967295 DEBUG    UI initialize: img2img
17:02:36-214436 DEBUG    UI initialize: control models=models\control
17:02:36-476429 DEBUG    Read: file="ui-config.json" json=0 bytes=2 time=0.000
17:02:36-575191 DEBUG    UI themes available: type=Standard themes=12
17:02:37-111806 DEBUG    Reading failed: C:\ai\automatic\html\extensions.json [Errno 2] No such file or directory:
                         'C:\\ai\\automatic\\html\\extensions.json'
17:02:37-113483 INFO     Extension list is empty: refresh required
17:02:37-696067 DEBUG    Extension list: processed=8 installed=8 enabled=6 disabled=2 visible=8 hidden=0
17:02:38-030172 DEBUG    Root paths: ['c:\\ai\\automatic']
17:02:38-105210 INFO     Local URL: http://127.0.0.1:7860/
17:02:38-106207 DEBUG    Gradio functions: registered=2364
17:02:38-107206 DEBUG    FastAPI middleware: ['Middleware', 'Middleware']
17:02:38-110196 DEBUG    Creating API
17:02:38-281876 INFO     [AgentScheduler] Task queue is empty
17:02:38-282874 INFO     [AgentScheduler] Registering APIs
17:02:38-402583 DEBUG    Scripts setup: ['IP Adapters:0.021', 'AnimateDiff:0.008', 'X/Y/Z Grid:0.01', 'Face:0.163']
17:02:38-404409 DEBUG    Model metadata: file="metadata.json" no changes
17:02:38-405437 DEBUG    Torch mode: deterministic=False
17:02:38-433707 INFO     Torch override VAE dtype: no-half set
17:02:38-435336 DEBUG    Desired Torch parameters: dtype=FP16 no-half=False no-half-vae=True upscast=False
17:02:38-435862 INFO     Setting Torch parameters: device=cuda dtype=torch.float16 vae=torch.float32 unet=torch.float16
                         context=no_grad fp16=True bf16=None optimization=Scaled-Dot-Product
17:02:38-438857 DEBUG    Model requested: fn=<lambda>
17:02:38-439855 INFO     Select: model="Diffusers\Disty0/FLUX.1-dev-qint4 [82811df42b]"
17:02:38-440851 DEBUG    Load model: existing=False
                         target=models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d
                         5170c306a6eb info=None
17:02:38-441849 DEBUG    Diffusers loading:
                         path="models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d5
                         170c306a6eb"
17:02:38-442847 INFO     Autodetect: model="FLUX" class=FluxPipeline
                         file="models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d5
                         170c306a6eb" size=0MB
17:02:38-446836 DEBUG    Loading FLUX: model="Diffusers\Disty0/FLUX.1-dev-qint4"
                         repo="Diffusers\Disty0/FLUX.1-dev-qint4" unet="None" t5="None" vae="None" quant=qint4
                         offload=model dtype=torch.float16
17:02:38-447833 TRACE    Loading FLUX: config={'low_cpu_mem_usage': True, 'torch_dtype': torch.float16,
                         'load_connected_pipeline': True, 'safety_checker': None, 'requires_safety_checker': False}
17:02:38-940479 TRACE    Loading FLUX: quantization
                         map="models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d51
                         70c306a6eb\transformer\quantization_map.json" repo="Diffusers\Disty0/FLUX.1-dev-qint4"
                         component="transformer"
17:02:38-942061 ERROR    Loading FLUX: Failed to load Quanto transformer: Repo id must use alphanumeric chars or '-',
                         '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is
                         96: 'Diffusers\Disty0/FLUX.1-dev-qint4'.
17:02:38-943709 ERROR    FLUX Quanto:: HFValidationError
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ C:\ai\automatic\modules\model_flux.py:50 in load_flux_quanto                                                         │
│                                                                                                                      │
│    49 │   │   │   repo_id = checkpoint_info.name.replace('Diffusers/', '').replace('models--', '').replace('--', '/' │
│ ❱  50 │   │   │   quantization_map = hf_hub_download(repo_id, subfolder='transformer', filename='quantization_map.js │
│    51 │   │   with open(quantization_map, "r", encoding='utf8') as f:                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_deprecation.py:101 in inner_f                          │
│                                                                                                                      │
│   100 │   │   │   │   warnings.warn(message, FutureWarning)                                                          │
│ ❱ 101 │   │   │   return f(*args, **kwargs)                                                                          │
│   102                                                                                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_validators.py:106 in _inner_fn                         │
│                                                                                                                      │
│   105 │   │   │   if arg_name in ["repo_id", "from_id", "to_id"]:                                                    │
│ ❱ 106 │   │   │   │   validate_repo_id(arg_value)                                                                    │
│   107                                                                                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_validators.py:160 in validate_repo_id                  │
│                                                                                                                      │
│   159 │   if not REPO_ID_REGEX.match(repo_id):                                                                       │
│ ❱ 160 │   │   raise HFValidationError(                                                                               │
│   161 │   │   │   "Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are"                          │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'Diffusers\Disty0/FLUX.1-dev-qint4'.
17:02:38-990000 TRACE    Loading FLUX: quantization
                         map="models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d51
                         70c306a6eb\text_encoder_2\quantization_map.json" repo="Diffusers\Disty0/FLUX.1-dev-qint4"
                         component="text_encoder_2"
17:02:38-991000 ERROR    Loading FLUX: Failed to load Quanto text encoder: Repo id must use alphanumeric chars or '-',
                         '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is
                         96: 'Diffusers\Disty0/FLUX.1-dev-qint4'.
17:02:38-992994 ERROR    FLUX Quanto:: HFValidationError
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ C:\ai\automatic\modules\model_flux.py:75 in load_flux_quanto                                                         │
│                                                                                                                      │
│    74 │   │   │   repo_id = checkpoint_info.name.replace('Diffusers/', '').replace('models--', '').replace('--', '/' │
│ ❱  75 │   │   │   quantization_map = hf_hub_download(repo_id, subfolder='text_encoder_2', filename='quantization_map │
│    76 │   │   with open(quantization_map, "r", encoding='utf8') as f:                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_deprecation.py:101 in inner_f                          │
│                                                                                                                      │
│   100 │   │   │   │   warnings.warn(message, FutureWarning)                                                          │
│ ❱ 101 │   │   │   return f(*args, **kwargs)                                                                          │
│   102                                                                                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_validators.py:106 in _inner_fn                         │
│                                                                                                                      │
│   105 │   │   │   if arg_name in ["repo_id", "from_id", "to_id"]:                                                    │
│ ❱ 106 │   │   │   │   validate_repo_id(arg_value)                                                                    │
│   107                                                                                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_validators.py:160 in validate_repo_id                  │
│                                                                                                                      │
│   159 │   if not REPO_ID_REGEX.match(repo_id):                                                                       │
│ ❱ 160 │   │   raise HFValidationError(                                                                               │
│   161 │   │   │   "Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are"                          │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'Diffusers\Disty0/FLUX.1-dev-qint4'.
17:02:39-015501 DEBUG    Loading FLUX: preloaded=[]
17:02:39-016498 ERROR    Diffusers Failed loading model:
                         models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d5170c30
                         6a6eb Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-'
                         and '.' cannot start or end the name, max length is 96: 'Diffusers\Disty0/FLUX.1-dev-qint4'.
17:02:39-018493 ERROR    Load: HFValidationError
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ C:\ai\automatic\modules\sd_models.py:1156 in load_diffuser                                                           │
│                                                                                                                      │
│   1155 │   │   │   │   │   from modules.model_flux import load_flux                                                  │
│ ❱ 1156 │   │   │   │   │   sd_model = load_flux(checkpoint_info, diffusers_load_config)                              │
│   1157 │   │   │   │   except Exception as e:                                                                        │
│                                                                                                                      │
│ C:\ai\automatic\modules\model_flux.py:241 in load_flux                                                               │
│                                                                                                                      │
│   240 │   shared.log.debug(f'Loading FLUX: preloaded={list(components)}')                                            │
│ ❱ 241 │   pipe = diffusers.FluxPipeline.from_pretrained(repo_id, cache_dir=shared.opts.diffusers_dir, **components,  │
│   242 │   return pipe                                                                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_validators.py:114 in _inner_fn                         │
│                                                                                                                      │
│   113 │   │                                                                                                          │
│ ❱ 114 │   │   return fn(*args, **kwargs)                                                                             │
│   115                                                                                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\diffusers\pipelines\pipeline_utils.py:706 in from_pretrained                  │
│                                                                                                                      │
│    705 │   │   │   │   )                                                                                             │
│ ❱  706 │   │   │   cached_folder = cls.download(                                                                     │
│    707 │   │   │   │   pretrained_model_name_or_path,                                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_validators.py:114 in _inner_fn                         │
│                                                                                                                      │
│   113 │   │                                                                                                          │
│ ❱ 114 │   │   return fn(*args, **kwargs)                                                                             │
│   115                                                                                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\diffusers\pipelines\pipeline_utils.py:1235 in download                        │
│                                                                                                                      │
│   1234 │   │   │   try:                                                                                              │
│ ❱ 1235 │   │   │   │   info = model_info(pretrained_model_name, token=token, revision=revision)                      │
│   1236 │   │   │   except (HTTPError, OfflineModeIsEnabled, requests.ConnectionError) as e:                          │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_validators.py:106 in _inner_fn                         │
│                                                                                                                      │
│   105 │   │   │   if arg_name in ["repo_id", "from_id", "to_id"]:                                                    │
│ ❱ 106 │   │   │   │   validate_repo_id(arg_value)                                                                    │
│   107                                                                                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_validators.py:160 in validate_repo_id                  │
│                                                                                                                      │
│   159 │   if not REPO_ID_REGEX.match(repo_id):                                                                       │
│ ❱ 160 │   │   raise HFValidationError(                                                                               │
│   161 │   │   │   "Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are"                          │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'Diffusers\Disty0/FLUX.1-dev-qint4'.
17:02:39-260847 DEBUG    Script callback init time: image_browser.py:ui_tabs=0.40 system-info.py:app_started=0.06
                         task_scheduler.py:app_started=0.14
17:02:39-262165 INFO     Startup time: 11.94 torch=4.17 gradio=1.29 diffusers=0.41 libraries=1.66 extensions=0.61
                         face-restore=0.20 ui-en=0.21 ui-txt2img=0.06 ui-img2img=0.21 ui-control=0.11 ui-settings=0.23
                         ui-extensions=1.02 ui-defaults=0.26 launch=0.13 api=0.09 app-started=0.20 checkpoint=0.86
17:02:39-264168 DEBUG    Save: file="config.json" json=35 bytes=1470 time=0.004
17:02:39-266162 DEBUG    Unused settings: ['cross_attention_options']
vladmandic commented 2 months ago

ah, need to handle the Windows path, my bad - try updating and running it all again?
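For context, the error comes from passing SD.Next's local model name (`'Diffusers\Disty0/FLUX.1-dev-qint4'`) straight to `hf_hub_download`, which validates repo ids and rejects the backslash. A minimal sketch of the kind of normalization needed (the helper name is hypothetical; the real fix lives in `modules/model_flux.py`):

```python
import re

def normalize_repo_id(name: str) -> str:
    """Turn SD.Next's local model name into a valid Hugging Face repo id.

    Strips the 'Diffusers/' (or Windows 'Diffusers\\') folder prefix and
    converts the HF cache naming ('models--org--repo') back to 'org/repo'.
    """
    name = name.replace('\\', '/')           # normalize Windows separators
    name = re.sub(r'^Diffusers/', '', name)  # drop the local folder prefix
    name = name.replace('models--', '').replace('--', '/')  # HF cache form
    return name

print(normalize_repo_id('Diffusers\\Disty0/FLUX.1-dev-qint4'))  # Disty0/FLUX.1-dev-qint4
print(normalize_repo_id('models--Disty0--FLUX.1-dev-qint4'))    # Disty0/FLUX.1-dev-qint4
```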

SAC020 commented 2 months ago

Still not working, but with different errors:
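The log below shows the repo-id validation is past, but now the shared diffusers load config (which contains `low_cpu_mem_usage`) appears to be forwarded to `hf_hub_download()`, which rejects unknown keywords with a `TypeError`. A minimal sketch of filtering such kwargs against the callee's signature (generic helper, demonstrated with a stand-in function rather than a real download):

```python
import inspect

def filter_kwargs(fn, kwargs):
    """Keep only the kwargs that fn actually accepts.

    Useful when a shared load-config dict (low_cpu_mem_usage, torch_dtype, ...)
    would otherwise be forwarded verbatim to huggingface_hub.hf_hub_download,
    which raises TypeError on unexpected keyword arguments.
    """
    allowed = set(inspect.signature(fn).parameters)
    return {k: v for k, v in kwargs.items() if k in allowed}

# stand-in mirroring the relevant subset of hf_hub_download's signature
def fake_hub_download(repo_id, filename, subfolder=None, revision=None):
    return (repo_id, filename, subfolder, revision)

config = {'low_cpu_mem_usage': True, 'torch_dtype': 'float16', 'revision': 'main'}
print(filter_kwargs(fake_hub_download, config))  # {'revision': 'main'}
```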

c:\ai\automatic>.\webui.bat --medvram --debug
Using VENV: c:\ai\automatic\venv
17:59:23-868420 INFO     Starting SD.Next
17:59:23-872025 INFO     Logger: file="c:\ai\automatic\sdnext.log" level=DEBUG size=65 mode=create
17:59:23-873792 INFO     Python version=3.11.9 platform=Windows bin="c:\ai\automatic\venv\Scripts\python.exe"
                         venv="c:\ai\automatic\venv"
17:59:24-077348 INFO     Version: app=sd.next updated=2024-09-04 hash=ce94b5a9 branch=dev
                         url=https://github.com/vladmandic/automatic/tree/dev ui=dev
17:59:24-881314 INFO     Latest published version: bab17a0b4f91b41c885f10262ef8c8e70ba72faa 2024-08-31T20:57:34Z
17:59:24-895164 INFO     Platform: arch=AMD64 cpu=Intel64 Family 6 Model 165 Stepping 5, GenuineIntel system=Windows
                         release=Windows-10-10.0.22631-SP0 python=3.11.9
17:59:24-896162 DEBUG    Setting environment tuning
17:59:24-898157 INFO     HF cache folder: C:\Users\sebas\.cache\huggingface\hub
17:59:24-898157 DEBUG    Torch allocator: "garbage_collection_threshold:0.80,max_split_size_mb:512"
17:59:24-909127 DEBUG    Torch overrides: cuda=False rocm=False ipex=False diml=False openvino=False
17:59:24-910125 DEBUG    Torch allowed: cuda=True rocm=True ipex=True diml=True openvino=True
17:59:24-920098 INFO     nVidia CUDA toolkit detected: nvidia-smi present
17:59:25-002448 WARNING  Modified files: ['models/Reference/playgroundai--playground-v2-1024px-aesthetic.jpg']
17:59:25-097194 INFO     Verifying requirements
17:59:25-100893 INFO     Verifying packages
17:59:25-143779 DEBUG    Repository update time: Wed Sep  4 17:39:01 2024
17:59:25-145746 INFO     Startup: standard
17:59:25-146125 INFO     Verifying submodules
17:59:28-032578 DEBUG    Git detached head detected: folder="extensions-builtin/sd-extension-chainner" reattach=main
17:59:28-032983 DEBUG    Submodule: extensions-builtin/sd-extension-chainner / main
17:59:28-161529 DEBUG    Git detached head detected: folder="extensions-builtin/sd-extension-system-info" reattach=main
17:59:28-162526 DEBUG    Submodule: extensions-builtin/sd-extension-system-info / main
17:59:28-289859 DEBUG    Git detached head detected: folder="extensions-builtin/sd-webui-agent-scheduler" reattach=main
17:59:28-290852 DEBUG    Submodule: extensions-builtin/sd-webui-agent-scheduler / main
17:59:28-474036 DEBUG    Git detached head detected: folder="extensions-builtin/sdnext-modernui" reattach=dev
17:59:28-475033 DEBUG    Submodule: extensions-builtin/sdnext-modernui / dev
17:59:28-622991 DEBUG    Git detached head detected: folder="extensions-builtin/stable-diffusion-webui-rembg"
                         reattach=master
17:59:28-624959 DEBUG    Submodule: extensions-builtin/stable-diffusion-webui-rembg / master
17:59:28-752795 DEBUG    Git detached head detected: folder="modules/k-diffusion" reattach=master
17:59:28-753792 DEBUG    Submodule: modules/k-diffusion / master
17:59:28-883211 DEBUG    Git detached head detected: folder="wiki" reattach=master
17:59:28-885206 DEBUG    Submodule: wiki / master
17:59:28-959796 DEBUG    Register paths
17:59:29-058532 DEBUG    Installed packages: 209
17:59:29-060598 DEBUG    Extensions all: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'sdnext-modernui',
                         'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg']
17:59:29-242114 DEBUG    Running extension installer:
                         C:\ai\automatic\extensions-builtin\sd-extension-system-info\install.py
17:59:29-618762 DEBUG    Running extension installer:
                         C:\ai\automatic\extensions-builtin\sd-webui-agent-scheduler\install.py
17:59:29-994073 DEBUG    Running extension installer: C:\ai\automatic\extensions-builtin\sd-webui-controlnet\install.py
17:59:30-538475 DEBUG    Running extension installer:
                         C:\ai\automatic\extensions-builtin\stable-diffusion-webui-images-browser\install.py
17:59:30-915614 DEBUG    Running extension installer:
                         C:\ai\automatic\extensions-builtin\stable-diffusion-webui-rembg\install.py
17:59:31-286851 DEBUG    Extensions all: []
17:59:31-288629 INFO     Extensions enabled: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'sdnext-modernui',
                         'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg']
17:59:31-290132 INFO     Verifying requirements
17:59:31-290132 DEBUG    Setup complete without errors: 1725461971
17:59:31-298110 DEBUG    Extension preload: {'extensions-builtin': 0.0, 'extensions': 0.0}
17:59:31-299108 DEBUG    Starting module: <module 'webui' from 'c:\\ai\\automatic\\webui.py'>
17:59:31-300106 INFO     Command line args: ['--medvram', '--debug'] medvram=True debug=True
17:59:31-301103 DEBUG    Env flags: ['SD_LOAD_DEBUG=true']
17:59:37-111449 INFO     Load packages: {'torch': '2.4.0+cu124', 'diffusers': '0.31.0.dev0', 'gradio': '3.43.2'}
17:59:38-150448 DEBUG    Read: file="config.json" json=35 bytes=1517 time=0.000
17:59:38-151446 DEBUG    Unknown settings: ['cross_attention_options']
17:59:38-154438 INFO     Engine: backend=Backend.DIFFUSERS compute=cuda device=cuda attention="Scaled-Dot-Product"
                         mode=no_grad
17:59:38-212319 INFO     Device: device=NVIDIA GeForce RTX 4080 n=1 arch=sm_90 cap=(8, 9) cuda=12.4 cudnn=90100
                         driver=560.81
17:59:38-214321 DEBUG    Read: file="html\reference.json" json=52 bytes=29118 time=0.000
17:59:38-591437 DEBUG    ONNX: version=1.19.0 provider=CUDAExecutionProvider, available=['AzureExecutionProvider',
                         'CPUExecutionProvider']
17:59:38-766639 DEBUG    Importing LDM
17:59:38-784209 DEBUG    Entering start sequence
17:59:38-786424 DEBUG    Initializing
17:59:38-812422 INFO     Available VAEs: path="models\VAE" items=0
17:59:38-814416 DEBUG    Available UNets: path="models\UNET" items=0
17:59:38-815612 DEBUG    Available T5s: path="models\T5" items=0
17:59:38-816639 INFO     Disabled extensions: ['sd-webui-controlnet', 'sdnext-modernui']
17:59:38-818606 DEBUG    Read: file="cache.json" json=2 bytes=10089 time=0.000
17:59:38-826585 DEBUG    Read: file="metadata.json" json=559 bytes=1861768 time=0.006
17:59:38-831254 DEBUG    Scanning diffusers cache: folder=models\Diffusers items=2 time=0.00
17:59:38-833117 INFO     Available models: path="models\Stable-diffusion" items=21 time=0.01
17:59:39-029780 DEBUG    Load extensions
17:59:39-077641 INFO     Extension: script='extensions-builtin\Lora\scripts\lora_script.py'
                         [2;36m17:59:39-073652[0m[2;36m [0m[34mINFO    [0m LoRA networks: [33mavailable[0m=[1;36m70[0m
                         [33mfolders[0m=[1;36m2[0m
17:59:39-455871 INFO     Extension: script='extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py' Using
                         sqlite file: extensions-builtin\sd-webui-agent-scheduler\task_scheduler.sqlite3
17:59:39-653674 DEBUG    Extensions init time: 0.62 sd-webui-agent-scheduler=0.33
                         stable-diffusion-webui-images-browser=0.18
17:59:39-666640 DEBUG    Read: file="html/upscalers.json" json=4 bytes=2672 time=0.000
17:59:39-667607 DEBUG    Read: file="extensions-builtin\sd-extension-chainner\models.json" json=24 bytes=2719 time=0.000
17:59:39-670600 DEBUG    chaiNNer models: path="models\chaiNNer" defined=24 discovered=0 downloaded=8
17:59:39-672594 DEBUG    Upscaler type=ESRGAN folder="models\ESRGAN" model="1x-ITF-SkinDiffDetail-Lite-v1"
                         path="models\ESRGAN\1x-ITF-SkinDiffDetail-Lite-v1.pth"
17:59:39-673592 DEBUG    Upscaler type=ESRGAN folder="models\ESRGAN" model="4xNMKDSuperscale_4xNMKDSuperscale"
                         path="models\ESRGAN\4xNMKDSuperscale_4xNMKDSuperscale.pth"
17:59:39-674589 DEBUG    Upscaler type=ESRGAN folder="models\ESRGAN" model="4x_NMKD-Siax_200k"
                         path="models\ESRGAN\4x_NMKD-Siax_200k.pth"
17:59:39-677581 DEBUG    Load upscalers: total=56 downloaded=11 user=3 time=0.02 ['None', 'Lanczos', 'Nearest',
                         'ChaiNNer', 'AuraSR', 'ESRGAN', 'LDSR', 'RealESRGAN', 'SCUNet', 'SD', 'SwinIR']
17:59:39-695532 DEBUG    Load styles: folder="models\styles" items=288 time=0.02
17:59:39-698527 DEBUG    Creating UI
17:59:39-699524 DEBUG    UI themes available: type=Standard themes=12
17:59:39-701517 INFO     UI theme: type=Standard name="black-teal"
17:59:39-708498 DEBUG    UI theme: css="C:\ai\automatic\javascript\black-teal.css" base="sdnext.css" user="None"
17:59:39-710492 DEBUG    UI initialize: txt2img
17:59:39-769917 DEBUG    Networks: page='model' items=72 subfolders=2 tab=txt2img folders=['models\\Stable-diffusion',
                         'models\\Diffusers', 'models\\Reference'] list=0.04 thumb=0.01 desc=0.00 info=0.00 workers=4
                         sort=Default
17:59:39-777896 DEBUG    Networks: page='lora' items=70 subfolders=0 tab=txt2img folders=['models\\Lora',
                         'models\\LyCORIS'] list=0.04 thumb=0.01 desc=0.02 info=0.02 workers=4 sort=Default
17:59:39-809007 DEBUG    Networks: page='style' items=288 subfolders=1 tab=txt2img folders=['models\\styles', 'html']
                         list=0.04 thumb=0.00 desc=0.00 info=0.00 workers=4 sort=Default
17:59:39-813993 DEBUG    Networks: page='embedding' items=13 subfolders=0 tab=txt2img folders=['models\\embeddings']
                         list=0.03 thumb=0.01 desc=0.00 info=0.00 workers=4 sort=Default
17:59:39-816411 DEBUG    Networks: page='vae' items=0 subfolders=0 tab=txt2img folders=['models\\VAE'] list=0.00
                         thumb=0.00 desc=0.00 info=0.00 workers=4 sort=Default
17:59:39-892210 DEBUG    UI initialize: img2img
17:59:40-135754 DEBUG    UI initialize: control models=models\control
17:59:40-400507 DEBUG    Read: file="ui-config.json" json=0 bytes=2 time=0.001
17:59:40-498271 DEBUG    UI themes available: type=Standard themes=12
17:59:41-034803 DEBUG    Reading failed: C:\ai\automatic\html\extensions.json [Errno 2] No such file or directory:
                         'C:\\ai\\automatic\\html\\extensions.json'
17:59:41-037061 INFO     Extension list is empty: refresh required
17:59:41-621196 DEBUG    Extension list: processed=8 installed=8 enabled=6 disabled=2 visible=8 hidden=0
17:59:41-949475 DEBUG    Root paths: ['c:\\ai\\automatic']
17:59:42-025795 INFO     Local URL: http://127.0.0.1:7860/
17:59:42-026795 DEBUG    Gradio functions: registered=2363
17:59:42-028759 DEBUG    FastAPI middleware: ['Middleware', 'Middleware']
17:59:42-031752 DEBUG    Creating API
17:59:42-198334 INFO     [AgentScheduler] Task queue is empty
17:59:42-200302 INFO     [AgentScheduler] Registering APIs
17:59:42-316990 DEBUG    Scripts setup: ['IP Adapters:0.021', 'AnimateDiff:0.007', 'X/Y/Z Grid:0.011', 'Face:0.012']
17:59:42-318777 DEBUG    Model metadata: file="metadata.json" no changes
17:59:42-320135 DEBUG    Torch mode: deterministic=False
17:59:42-348090 INFO     Torch override VAE dtype: no-half set
17:59:42-349087 DEBUG    Desired Torch parameters: dtype=FP16 no-half=False no-half-vae=True upscast=False
17:59:42-349776 INFO     Setting Torch parameters: device=cuda dtype=torch.float16 vae=torch.float32 unet=torch.float16
                         context=no_grad fp16=True bf16=None optimization=Scaled-Dot-Product
17:59:42-351773 DEBUG    Model requested: fn=<lambda>
17:59:42-352772 INFO     Select: model="Diffusers\Disty0/FLUX.1-dev-qint4 [82811df42b]"
17:59:42-354766 DEBUG    Load model: existing=False
                         target=models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d
                         5170c306a6eb info=None
17:59:42-355764 DEBUG    Diffusers loading:
                         path="models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d5
                         170c306a6eb"
17:59:42-356761 INFO     Autodetect: model="FLUX" class=FluxPipeline
                         file="models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d5
                         170c306a6eb" size=0MB
17:59:42-359753 DEBUG    Loading FLUX: model="Diffusers\Disty0/FLUX.1-dev-qint4" repo="Disty0/FLUX.1-dev-qint4"
                         unet="None" t5="None" vae="None" quant=qint4 offload=model dtype=torch.float16
17:59:42-361748 TRACE    Loading FLUX: config={'low_cpu_mem_usage': True, 'torch_dtype': torch.float16,
                         'load_connected_pipeline': True, 'safety_checker': None, 'requires_safety_checker': False}
17:59:42-855418 TRACE    Loading FLUX: quantization
                         map="models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d51
                         70c306a6eb\transformer\quantization_map.json" repo="Diffusers\Disty0/FLUX.1-dev-qint4"
                         component="transformer"
17:59:42-857413 ERROR    Loading FLUX: Failed to load Quanto transformer: hf_hub_download() got an unexpected keyword
                         argument 'low_cpu_mem_usage'
17:59:42-857845 ERROR    FLUX Quanto:: TypeError
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ C:\ai\automatic\modules\model_flux.py:50 in load_flux_quanto                                                         │
│                                                                                                                      │
│    49 │   │   │   repo_id = checkpoint_info.name.replace('Diffusers/', '').replace('Diffusers\\', '').replace('model │
│ ❱  50 │   │   │   quantization_map = hf_hub_download(repo_id, subfolder='transformer', filename='quantization_map.js │
│    51 │   │   with open(quantization_map, "r", encoding='utf8') as f:                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_deprecation.py:101 in inner_f                          │
│                                                                                                                      │
│   100 │   │   │   │   warnings.warn(message, FutureWarning)                                                          │
│ ❱ 101 │   │   │   return f(*args, **kwargs)                                                                          │
│   102                                                                                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_validators.py:114 in _inner_fn                         │
│                                                                                                                      │
│   113 │   │                                                                                                          │
│ ❱ 114 │   │   return fn(*args, **kwargs)                                                                             │
│   115                                                                                                                │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError: hf_hub_download() got an unexpected keyword argument 'low_cpu_mem_usage'
17:59:42-899735 TRACE    Loading FLUX: quantization
                         map="models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d51
                         70c306a6eb\text_encoder_2\quantization_map.json" repo="Diffusers\Disty0/FLUX.1-dev-qint4"
                         component="text_encoder_2"
17:59:42-901731 ERROR    Loading FLUX: Failed to load Quanto text encoder: hf_hub_download() got an unexpected keyword
                         argument 'low_cpu_mem_usage'
17:59:42-902727 ERROR    FLUX Quanto:: TypeError
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ C:\ai\automatic\modules\model_flux.py:75 in load_flux_quanto                                                         │
│                                                                                                                      │
│    74 │   │   │   repo_id = checkpoint_info.name.replace('Diffusers/', '').replace('Diffusers\\', '').replace('model │
│ ❱  75 │   │   │   quantization_map = hf_hub_download(repo_id, subfolder='text_encoder_2', filename='quantization_map │
│    76 │   │   with open(quantization_map, "r", encoding='utf8') as f:                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_deprecation.py:101 in inner_f                          │
│                                                                                                                      │
│   100 │   │   │   │   warnings.warn(message, FutureWarning)                                                          │
│ ❱ 101 │   │   │   return f(*args, **kwargs)                                                                          │
│   102                                                                                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_validators.py:114 in _inner_fn                         │
│                                                                                                                      │
│   113 │   │                                                                                                          │
│ ❱ 114 │   │   return fn(*args, **kwargs)                                                                             │
│   115                                                                                                                │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError: hf_hub_download() got an unexpected keyword argument 'low_cpu_mem_usage'
17:59:42-922737 DEBUG    Loading FLUX: preloaded=[]
Loading pipeline components...  14% ━━━━━━━╺━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1/7  [ 0:00:00 < -:--:-- , ? C/s ]
17:59:43-922282 ERROR    Diffusers Failed loading model:
                         models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d5170c30
                         6a6eb Cannot load <class
                         'diffusers.models.transformers.transformer_flux.FluxTransformer2DModel'> from
                         models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d5170c30
                         6a6eb\transformer because the following keys are missing:
                          transformer_blocks.2.attn.to_k.weight, transformer_blocks.8.norm1.linear.weight,
                         transformer_blocks.5.attn.add_v_proj.weight, transformer_blocks.16.ff_context.net.2.weight,
                         transformer_blocks.0.attn.to_add_out.weight, transformer_blocks.8.attn.to_k.weight,
                         transformer_blocks.5.norm1_context.linear.weight, transformer_blocks.13.attn.add_k_proj.weight,
                         transformer_blocks.0.ff.net.2.weight, single_transformer_blocks.23.norm.linear.weight,
                         single_transformer_blocks.24.norm.linear.weight, transformer_blocks.8.attn.to_q.weight,
                         single_transformer_blocks.28.attn.to_q.weight, single_transformer_blocks.20.norm.linear.weight,
                         single_transformer_blocks.36.proj_out.weight, transformer_blocks.11.attn.add_q_proj.weight,
                         transformer_blocks.12.attn.to_k.weight, single_transformer_blocks.6.proj_mlp.weight,
                         single_transformer_blocks.27.proj_out.weight, transformer_blocks.1.attn.add_k_proj.weight,
                         single_transformer_blocks.2.norm.linear.weight, transformer_blocks.4.attn.add_q_proj.weight,
                         transformer_blocks.12.attn.to_out.0.weight, single_transformer_blocks.32.proj_out.weight,
                         single_transformer_blocks.3.norm.linear.weight, transformer_blocks.5.ff_context.net.2.weight,
                         transformer_blocks.9.attn.to_k.weight, transformer_blocks.12.ff.net.2.weight,
                         single_transformer_blocks.0.attn.to_q.weight,
                         time_text_embed.timestep_embedder.linear_1.weight,
                         transformer_blocks.12.ff_context.net.2.weight, transformer_blocks.5.ff.net.0.proj.weight,
                         single_transformer_blocks.21.attn.to_k.weight, transformer_blocks.0.ff_context.net.2.weight,
                         single_transformer_blocks.23.attn.to_v.weight, transformer_blocks.5.attn.to_k.weight,
                         transformer_blocks.9.ff.net.0.proj.weight, single_transformer_blocks.35.attn.to_v.weight,
                         single_transformer_blocks.25.attn.to_q.weight, transformer_blocks.8.attn.add_q_proj.weight,
                         single_transformer_blocks.0.attn.to_k.weight, transformer_blocks.18.attn.to_q.weight,
                         single_transformer_blocks.2.proj_out.weight, single_transformer_blocks.24.proj_out.weight,
                         transformer_blocks.5.attn.to_out.0.weight, single_transformer_blocks.8.attn.to_k.weight,
                         transformer_blocks.18.attn.add_k_proj.weight, single_transformer_blocks.22.proj_mlp.weight,
                         transformer_blocks.16.ff_context.net.0.proj.weight, transformer_blocks.17.attn.to_k.weight,
                         single_transformer_blocks.18.proj_mlp.weight, transformer_blocks.15.ff.net.2.weight,
                         transformer_blocks.7.ff_context.net.2.weight, single_transformer_blocks.36.attn.to_q.weight,
                         transformer_blocks.13.attn.to_v.weight, single_transformer_blocks.34.attn.to_k.weight,
                         single_transformer_blocks.12.attn.to_v.weight, single_transformer_blocks.22.norm.linear.weight,
                         transformer_blocks.10.ff_context.net.2.weight, transformer_blocks.0.attn.add_k_proj.weight,
                         transformer_blocks.9.attn.to_add_out.weight, single_transformer_blocks.26.proj_mlp.weight,
                         transformer_blocks.6.attn.to_out.0.weight, single_transformer_blocks.13.norm.linear.weight,
                         single_transformer_blocks.11.attn.to_k.weight, transformer_blocks.1.attn.add_v_proj.weight,
                         transformer_blocks.1.attn.to_q.weight, transformer_blocks.15.ff_context.net.2.weight,
                         transformer_blocks.15.attn.to_add_out.weight, single_transformer_blocks.0.proj_out.weight,
                         transformer_blocks.4.attn.add_k_proj.weight, transformer_blocks.3.norm1_context.linear.weight,
                         single_transformer_blocks.3.proj_mlp.weight, single_transformer_blocks.17.attn.to_k.weight,
                         transformer_blocks.5.attn.to_v.weight, single_transformer_blocks.5.norm.linear.weight,
                         single_transformer_blocks.10.norm.linear.weight, single_transformer_blocks.0.proj_mlp.weight,
                         transformer_blocks.9.ff_context.net.0.proj.weight, transformer_blocks.13.ff.net.0.proj.weight,
                         single_transformer_blocks.8.proj_out.weight, transformer_blocks.0.attn.to_q.weight,
                         transformer_blocks.14.ff.net.2.weight, transformer_blocks.1.attn.add_q_proj.weight,
                         transformer_blocks.16.ff.net.0.proj.weight, transformer_blocks.18.norm1.linear.weight,
                         transformer_blocks.1.ff.net.0.proj.weight, transformer_blocks.3.attn.add_k_proj.weight,
                         transformer_blocks.14.attn.to_q.weight, transformer_blocks.9.attn.add_v_proj.weight,
                         single_transformer_blocks.36.attn.to_k.weight, single_transformer_blocks.3.attn.to_k.weight,
                         transformer_blocks.7.attn.add_v_proj.weight, single_transformer_blocks.7.attn.to_q.weight,
                         transformer_blocks.17.ff.net.0.proj.weight, transformer_blocks.4.attn.to_out.0.weight,
                         transformer_blocks.4.attn.to_q.weight, single_transformer_blocks.30.attn.to_q.weight,
                         transformer_blocks.5.ff_context.net.0.proj.weight, transformer_blocks.12.norm1.linear.weight,
                         transformer_blocks.14.attn.to_add_out.weight, single_transformer_blocks.2.attn.to_q.weight,
                         transformer_blocks.6.attn.to_k.weight, transformer_blocks.11.ff_context.net.2.weight,
                         transformer_blocks.16.attn.to_q.weight, transformer_blocks.8.ff.net.2.weight,
                         single_transformer_blocks.23.attn.to_q.weight, norm_out.linear.weight,
                         single_transformer_blocks.7.proj_mlp.weight, transformer_blocks.10.attn.add_v_proj.weight,
                         transformer_blocks.16.attn.to_out.0.weight, single_transformer_blocks.1.attn.to_v.weight,
                         single_transformer_blocks.33.attn.to_q.weight, transformer_blocks.11.attn.to_v.weight,
                         single_transformer_blocks.26.attn.to_q.weight, single_transformer_blocks.9.attn.to_q.weight,
                         transformer_blocks.12.ff_context.net.0.proj.weight,
                         transformer_blocks.5.attn.add_k_proj.weight, transformer_blocks.18.attn.to_out.0.weight,
                         transformer_blocks.6.ff_context.net.2.weight, transformer_blocks.12.attn.add_k_proj.weight,
                         transformer_blocks.15.attn.add_v_proj.weight, single_transformer_blocks.26.attn.to_v.weight,
                         time_text_embed.text_embedder.linear_1.weight,
                         transformer_blocks.18.norm1_context.linear.weight, transformer_blocks.14.attn.to_out.0.weight,
                         single_transformer_blocks.6.attn.to_q.weight, transformer_blocks.17.attn.to_add_out.weight,
                         transformer_blocks.10.attn.to_q.weight, transformer_blocks.17.norm1_context.linear.weight,
                         transformer_blocks.3.ff_context.net.0.proj.weight,
                         single_transformer_blocks.15.proj_out.weight, transformer_blocks.8.attn.to_add_out.weight,
                         transformer_blocks.10.ff_context.net.0.proj.weight,
                         single_transformer_blocks.24.attn.to_k.weight, transformer_blocks.13.ff_context.net.2.weight,
                         transformer_blocks.16.attn.to_v.weight, transformer_blocks.10.attn.add_k_proj.weight,
                         transformer_blocks.7.attn.add_q_proj.weight, single_transformer_blocks.26.norm.linear.weight,
                         transformer_blocks.11.ff_context.net.0.proj.weight,
                         single_transformer_blocks.13.attn.to_v.weight, single_transformer_blocks.16.proj_mlp.weight,
                         single_transformer_blocks.13.proj_mlp.weight, transformer_blocks.14.norm1.linear.weight,
                         single_transformer_blocks.1.norm.linear.weight, single_transformer_blocks.1.proj_mlp.weight,
                         single_transformer_blocks.16.norm.linear.weight, transformer_blocks.17.attn.to_out.0.weight,
                         transformer_blocks.5.attn.to_add_out.weight, transformer_blocks.13.ff.net.2.weight,
                         transformer_blocks.12.ff.net.0.proj.weight, transformer_blocks.3.attn.to_out.0.weight,
                         transformer_blocks.15.attn.to_out.0.weight, transformer_blocks.14.ff_context.net.2.weight,
                         transformer_blocks.15.attn.add_k_proj.weight, transformer_blocks.17.ff_context.net.2.weight,
                         single_transformer_blocks.8.attn.to_v.weight,
                         time_text_embed.timestep_embedder.linear_2.weight,
                         single_transformer_blocks.10.attn.to_q.weight, single_transformer_blocks.12.norm.linear.weight,
                         transformer_blocks.4.attn.to_v.weight, single_transformer_blocks.24.attn.to_q.weight,
                         transformer_blocks.0.norm1_context.linear.weight, transformer_blocks.17.norm1.linear.weight,
                         transformer_blocks.18.attn.to_add_out.weight, single_transformer_blocks.24.attn.to_v.weight,
                         single_transformer_blocks.25.proj_out.weight,
                         transformer_blocks.12.norm1_context.linear.weight,
                         single_transformer_blocks.10.attn.to_k.weight, single_transformer_blocks.15.proj_mlp.weight,
                         single_transformer_blocks.31.attn.to_k.weight, transformer_blocks.5.attn.add_q_proj.weight,
                         transformer_blocks.18.attn.to_k.weight, transformer_blocks.3.ff.net.0.proj.weight,
                         single_transformer_blocks.20.proj_out.weight,
                         time_text_embed.guidance_embedder.linear_2.weight,
                         transformer_blocks.18.attn.add_v_proj.weight, single_transformer_blocks.21.norm.linear.weight,
                         transformer_blocks.17.attn.add_k_proj.weight, transformer_blocks.17.attn.to_v.weight,
                         transformer_blocks.9.ff.net.2.weight, transformer_blocks.17.ff.net.2.weight,
                         single_transformer_blocks.14.proj_mlp.weight, x_embedder.weight,
                         single_transformer_blocks.30.attn.to_k.weight, transformer_blocks.0.attn.add_v_proj.weight,
                         single_transformer_blocks.31.proj_mlp.weight, single_transformer_blocks.32.proj_mlp.weight,
                         transformer_blocks.14.attn.add_k_proj.weight, single_transformer_blocks.16.attn.to_q.weight,
                         single_transformer_blocks.27.attn.to_q.weight, transformer_blocks.15.attn.to_k.weight,
                         single_transformer_blocks.10.proj_out.weight, transformer_blocks.1.ff.net.2.weight,
                         transformer_blocks.7.attn.to_add_out.weight, single_transformer_blocks.8.proj_mlp.weight,
                         single_transformer_blocks.5.attn.to_v.weight, transformer_blocks.14.attn.add_q_proj.weight,
                         transformer_blocks.11.norm1.linear.weight, single_transformer_blocks.31.proj_out.weight,
                         transformer_blocks.2.attn.to_q.weight, transformer_blocks.16.attn.add_k_proj.weight,
                         transformer_blocks.8.ff.net.0.proj.weight, single_transformer_blocks.22.attn.to_v.weight,
                         single_transformer_blocks.17.attn.to_q.weight, transformer_blocks.18.ff.net.0.proj.weight,
                         transformer_blocks.3.attn.to_add_out.weight, single_transformer_blocks.16.attn.to_k.weight,
                         transformer_blocks.17.attn.add_v_proj.weight, single_transformer_blocks.1.attn.to_k.weight,
                         single_transformer_blocks.29.proj_out.weight, transformer_blocks.3.attn.to_k.weight,
                         single_transformer_blocks.21.attn.to_q.weight,
                         transformer_blocks.15.norm1_context.linear.weight,
                         transformer_blocks.13.ff_context.net.0.proj.weight,
                         transformer_blocks.1.norm1_context.linear.weight, transformer_blocks.2.attn.add_q_proj.weight,
                         transformer_blocks.6.norm1.linear.weight, single_transformer_blocks.31.attn.to_v.weight,
                         single_transformer_blocks.17.norm.linear.weight, transformer_blocks.15.norm1.linear.weight,
                         transformer_blocks.15.ff_context.net.0.proj.weight,
                         single_transformer_blocks.37.proj_mlp.weight,
                         transformer_blocks.0.ff_context.net.0.proj.weight, transformer_blocks.12.attn.to_q.weight,
                         single_transformer_blocks.24.proj_mlp.weight, single_transformer_blocks.5.proj_mlp.weight,
                         single_transformer_blocks.35.proj_mlp.weight, single_transformer_blocks.20.attn.to_k.weight,
                         transformer_blocks.13.attn.to_out.0.weight, transformer_blocks.7.norm1.linear.weight,
                         transformer_blocks.3.ff_context.net.2.weight, single_transformer_blocks.22.attn.to_q.weight,
                         transformer_blocks.4.ff_context.net.2.weight, transformer_blocks.17.attn.to_q.weight,
                         transformer_blocks.7.attn.to_out.0.weight, single_transformer_blocks.27.attn.to_k.weight,
                         single_transformer_blocks.5.attn.to_k.weight, transformer_blocks.7.ff.net.2.weight,
                         single_transformer_blocks.14.attn.to_k.weight, transformer_blocks.4.ff.net.0.proj.weight,
                         single_transformer_blocks.29.attn.to_v.weight, transformer_blocks.3.norm1.linear.weight,
                         transformer_blocks.0.ff.net.0.proj.weight, transformer_blocks.1.ff_context.net.2.weight,
                         single_transformer_blocks.15.attn.to_q.weight, single_transformer_blocks.36.proj_mlp.weight,
                         single_transformer_blocks.33.proj_out.weight, transformer_blocks.4.attn.to_k.weight,
                         transformer_blocks.15.attn.add_q_proj.weight, transformer_blocks.4.attn.to_add_out.weight,
                         transformer_blocks.1.attn.to_add_out.weight, single_transformer_blocks.28.attn.to_v.weight,
                         transformer_blocks.11.attn.to_k.weight, transformer_blocks.5.attn.to_q.weight,
                         transformer_blocks.17.attn.add_q_proj.weight, transformer_blocks.10.attn.add_q_proj.weight,
                         transformer_blocks.7.attn.to_k.weight, single_transformer_blocks.17.proj_mlp.weight,
                         single_transformer_blocks.26.attn.to_k.weight, single_transformer_blocks.29.attn.to_k.weight,
                         single_transformer_blocks.34.attn.to_q.weight, single_transformer_blocks.25.proj_mlp.weight,
                         transformer_blocks.1.attn.to_k.weight, single_transformer_blocks.19.attn.to_q.weight,
                         transformer_blocks.0.attn.to_v.weight, transformer_blocks.2.attn.to_v.weight,
                         transformer_blocks.8.ff_context.net.2.weight, transformer_blocks.9.norm1_context.linear.weight,
                         single_transformer_blocks.1.attn.to_q.weight, single_transformer_blocks.19.proj_mlp.weight,
                         transformer_blocks.8.attn.to_out.0.weight, single_transformer_blocks.30.proj_mlp.weight,
                         single_transformer_blocks.28.attn.to_k.weight, single_transformer_blocks.12.proj_out.weight,
                         transformer_blocks.6.attn.to_v.weight, transformer_blocks.2.ff.net.2.weight,
                         single_transformer_blocks.23.proj_mlp.weight, transformer_blocks.12.attn.to_v.weight,
                         single_transformer_blocks.34.norm.linear.weight, transformer_blocks.18.ff.net.2.weight,
                         single_transformer_blocks.29.attn.to_q.weight, single_transformer_blocks.9.proj_mlp.weight,
                         single_transformer_blocks.12.attn.to_q.weight, single_transformer_blocks.7.attn.to_v.weight,
                         single_transformer_blocks.9.proj_out.weight, transformer_blocks.16.attn.to_add_out.weight,
                         transformer_blocks.3.ff.net.2.weight, single_transformer_blocks.6.attn.to_v.weight,
                         transformer_blocks.9.ff_context.net.2.weight, single_transformer_blocks.20.proj_mlp.weight,
                         transformer_blocks.8.norm1_context.linear.weight,
                         single_transformer_blocks.20.attn.to_v.weight, single_transformer_blocks.21.proj_out.weight,
                         transformer_blocks.5.ff.net.2.weight, transformer_blocks.7.attn.to_v.weight,
                         transformer_blocks.10.ff.net.0.proj.weight, transformer_blocks.9.attn.add_q_proj.weight,
                         single_transformer_blocks.13.proj_out.weight, single_transformer_blocks.2.proj_mlp.weight,
                         time_text_embed.guidance_embedder.linear_1.weight, transformer_blocks.11.attn.to_q.weight,
                         transformer_blocks.8.ff_context.net.0.proj.weight, transformer_blocks.9.norm1.linear.weight,
                         single_transformer_blocks.22.attn.to_k.weight,
                         transformer_blocks.6.norm1_context.linear.weight,
                         single_transformer_blocks.15.attn.to_v.weight, single_transformer_blocks.21.attn.to_v.weight,
                         single_transformer_blocks.34.proj_mlp.weight, transformer_blocks.13.attn.to_q.weight,
                         transformer_blocks.4.norm1.linear.weight, transformer_blocks.10.attn.to_v.weight,
                         single_transformer_blocks.4.norm.linear.weight,
                         transformer_blocks.1.ff_context.net.0.proj.weight,
                         single_transformer_blocks.14.proj_out.weight, transformer_blocks.16.attn.to_k.weight,
                         single_transformer_blocks.37.proj_out.weight, transformer_blocks.18.ff_context.net.2.weight,
                         transformer_blocks.0.attn.to_k.weight, transformer_blocks.16.ff.net.2.weight,
                         transformer_blocks.18.ff_context.net.0.proj.weight,
                         transformer_blocks.8.attn.add_k_proj.weight, single_transformer_blocks.18.attn.to_v.weight,
                         single_transformer_blocks.28.proj_out.weight, single_transformer_blocks.5.attn.to_q.weight,
                         single_transformer_blocks.32.attn.to_k.weight, transformer_blocks.6.attn.to_add_out.weight,
                         transformer_blocks.2.norm1_context.linear.weight,
                         single_transformer_blocks.32.norm.linear.weight,
                         single_transformer_blocks.37.norm.linear.weight,
                         single_transformer_blocks.6.norm.linear.weight, single_transformer_blocks.10.proj_mlp.weight,
                         single_transformer_blocks.6.proj_out.weight, single_transformer_blocks.30.attn.to_v.weight,
                         single_transformer_blocks.27.norm.linear.weight, transformer_blocks.2.norm1.linear.weight,
                         transformer_blocks.12.attn.add_v_proj.weight, single_transformer_blocks.36.attn.to_v.weight,
                         single_transformer_blocks.25.norm.linear.weight, single_transformer_blocks.25.attn.to_v.weight,
                         single_transformer_blocks.12.proj_mlp.weight, transformer_blocks.3.attn.add_q_proj.weight,
                         single_transformer_blocks.18.proj_out.weight, transformer_blocks.18.attn.add_q_proj.weight,
                         transformer_blocks.6.attn.add_k_proj.weight, transformer_blocks.18.attn.to_v.weight,
                         single_transformer_blocks.25.attn.to_k.weight, transformer_blocks.1.attn.to_v.weight,
                         single_transformer_blocks.2.attn.to_v.weight, single_transformer_blocks.11.norm.linear.weight,
                         single_transformer_blocks.18.attn.to_k.weight, transformer_blocks.12.attn.to_add_out.weight,
                         transformer_blocks.0.attn.to_out.0.weight, transformer_blocks.6.ff.net.2.weight,
                         single_transformer_blocks.33.proj_mlp.weight, single_transformer_blocks.30.norm.linear.weight,
                         single_transformer_blocks.27.proj_mlp.weight, single_transformer_blocks.35.proj_out.weight,
                         transformer_blocks.10.ff.net.2.weight, single_transformer_blocks.26.proj_out.weight,
                         transformer_blocks.13.attn.to_add_out.weight, single_transformer_blocks.17.proj_out.weight,
                         single_transformer_blocks.19.proj_out.weight, transformer_blocks.4.ff.net.2.weight,
                         single_transformer_blocks.0.norm.linear.weight, single_transformer_blocks.37.attn.to_k.weight,
                         transformer_blocks.13.norm1.linear.weight, single_transformer_blocks.4.attn.to_k.weight,
                         single_transformer_blocks.2.attn.to_k.weight, transformer_blocks.9.attn.add_k_proj.weight,
                         transformer_blocks.2.attn.to_add_out.weight, single_transformer_blocks.37.attn.to_v.weight,
                         single_transformer_blocks.9.norm.linear.weight, single_transformer_blocks.11.attn.to_q.weight,
                         transformer_blocks.11.attn.to_add_out.weight, transformer_blocks.10.norm1.linear.weight,
                         proj_out.weight, transformer_blocks.9.attn.to_v.weight,
                         transformer_blocks.4.ff_context.net.0.proj.weight,
                         single_transformer_blocks.29.norm.linear.weight, transformer_blocks.11.attn.add_k_proj.weight,
                         single_transformer_blocks.35.attn.to_k.weight,
                         transformer_blocks.11.norm1_context.linear.weight, transformer_blocks.15.ff.net.0.proj.weight,
                         transformer_blocks.13.attn.to_k.weight, transformer_blocks.13.norm1_context.linear.weight,
                         single_transformer_blocks.22.proj_out.weight,
                         transformer_blocks.14.norm1_context.linear.weight,
                         single_transformer_blocks.14.attn.to_v.weight, single_transformer_blocks.4.proj_mlp.weight,
                         single_transformer_blocks.1.proj_out.weight, transformer_blocks.3.attn.to_v.weight,
                         single_transformer_blocks.11.proj_out.weight, single_transformer_blocks.21.proj_mlp.weight,
                         transformer_blocks.2.attn.add_v_proj.weight, transformer_blocks.6.attn.add_v_proj.weight,
                         single_transformer_blocks.15.attn.to_k.weight, single_transformer_blocks.29.proj_mlp.weight,
                         single_transformer_blocks.19.norm.linear.weight, transformer_blocks.14.attn.to_k.weight,
                         transformer_blocks.3.attn.to_q.weight, single_transformer_blocks.34.proj_out.weight,
                         single_transformer_blocks.3.proj_out.weight, transformer_blocks.16.norm1.linear.weight,
                         transformer_blocks.14.attn.add_v_proj.weight, transformer_blocks.16.attn.add_v_proj.weight,
                         transformer_blocks.0.norm1.linear.weight, single_transformer_blocks.23.attn.to_k.weight,
                         single_transformer_blocks.10.attn.to_v.weight, transformer_blocks.1.attn.to_out.0.weight,
                         single_transformer_blocks.32.attn.to_q.weight, single_transformer_blocks.16.proj_out.weight,
                         single_transformer_blocks.17.attn.to_v.weight, transformer_blocks.4.attn.add_v_proj.weight,
                         transformer_blocks.10.attn.to_out.0.weight, single_transformer_blocks.8.norm.linear.weight,
                         transformer_blocks.10.attn.to_k.weight, single_transformer_blocks.4.proj_out.weight,
                         single_transformer_blocks.18.attn.to_q.weight,
                         transformer_blocks.16.norm1_context.linear.weight,
                         single_transformer_blocks.7.attn.to_k.weight, transformer_blocks.8.attn.add_v_proj.weight,
                         single_transformer_blocks.31.norm.linear.weight, transformer_blocks.0.attn.add_q_proj.weight,
                         single_transformer_blocks.34.attn.to_v.weight, single_transformer_blocks.36.norm.linear.weight,
                         single_transformer_blocks.7.norm.linear.weight, transformer_blocks.7.attn.to_q.weight,
                         transformer_blocks.2.ff.net.0.proj.weight, transformer_blocks.5.norm1.linear.weight,
                         transformer_blocks.8.attn.to_v.weight, transformer_blocks.4.norm1_context.linear.weight,
                         single_transformer_blocks.15.norm.linear.weight, single_transformer_blocks.30.proj_out.weight,
                         single_transformer_blocks.19.attn.to_k.weight,
                         transformer_blocks.10.norm1_context.linear.weight, transformer_blocks.9.attn.to_q.weight,
                         transformer_blocks.14.attn.to_v.weight, transformer_blocks.6.attn.add_q_proj.weight,
                         single_transformer_blocks.11.proj_mlp.weight, transformer_blocks.13.attn.add_v_proj.weight,
                         single_transformer_blocks.3.attn.to_q.weight, single_transformer_blocks.19.attn.to_v.weight,
                         single_transformer_blocks.7.proj_out.weight, transformer_blocks.6.ff.net.0.proj.weight,
                         transformer_blocks.11.ff.net.0.proj.weight, single_transformer_blocks.14.attn.to_q.weight,
                         single_transformer_blocks.31.attn.to_q.weight, single_transformer_blocks.4.attn.to_v.weight,
                         single_transformer_blocks.0.attn.to_v.weight,
                         transformer_blocks.14.ff_context.net.0.proj.weight,
                         single_transformer_blocks.35.attn.to_q.weight, single_transformer_blocks.13.attn.to_k.weight,
                         transformer_blocks.11.ff.net.2.weight, single_transformer_blocks.6.attn.to_k.weight,
                         transformer_blocks.7.attn.add_k_proj.weight, transformer_blocks.12.attn.add_q_proj.weight,
                         transformer_blocks.15.attn.to_q.weight, single_transformer_blocks.33.attn.to_v.weight,
                         single_transformer_blocks.12.attn.to_k.weight,
                         transformer_blocks.6.ff_context.net.0.proj.weight,
                         transformer_blocks.7.norm1_context.linear.weight,
                         single_transformer_blocks.37.attn.to_q.weight, single_transformer_blocks.9.attn.to_k.weight,
                         transformer_blocks.14.ff.net.0.proj.weight, single_transformer_blocks.11.attn.to_v.weight,
                         transformer_blocks.9.attn.to_out.0.weight, single_transformer_blocks.4.attn.to_q.weight,
                         transformer_blocks.15.attn.to_v.weight, transformer_blocks.17.ff_context.net.0.proj.weight,
                         transformer_blocks.1.norm1.linear.weight, transformer_blocks.6.attn.to_q.weight,
                         transformer_blocks.2.attn.add_k_proj.weight, transformer_blocks.7.ff_context.net.0.proj.weight,
                         single_transformer_blocks.16.attn.to_v.weight, transformer_blocks.10.attn.to_add_out.weight,
                         transformer_blocks.3.attn.add_v_proj.weight, transformer_blocks.13.attn.add_q_proj.weight,
                         transformer_blocks.2.attn.to_out.0.weight, single_transformer_blocks.20.attn.to_q.weight,
                         single_transformer_blocks.13.attn.to_q.weight, transformer_blocks.7.ff.net.0.proj.weight,
                         transformer_blocks.16.attn.add_q_proj.weight, single_transformer_blocks.8.attn.to_q.weight,
                         transformer_blocks.11.attn.add_v_proj.weight, single_transformer_blocks.18.norm.linear.weight,
                         context_embedder.weight, single_transformer_blocks.33.attn.to_k.weight,
                         time_text_embed.text_embedder.linear_2.weight, single_transformer_blocks.9.attn.to_v.weight,
                         single_transformer_blocks.3.attn.to_v.weight, single_transformer_blocks.23.proj_out.weight,
                         single_transformer_blocks.5.proj_out.weight, single_transformer_blocks.35.norm.linear.weight,
                         single_transformer_blocks.14.norm.linear.weight,
                         single_transformer_blocks.33.norm.linear.weight,
                         single_transformer_blocks.28.norm.linear.weight, transformer_blocks.11.attn.to_out.0.weight,
                         transformer_blocks.2.ff_context.net.2.weight,
                         transformer_blocks.2.ff_context.net.0.proj.weight,
                         single_transformer_blocks.28.proj_mlp.weight, single_transformer_blocks.27.attn.to_v.weight,
                         single_transformer_blocks.32.attn.to_v.weight.
                          Please make sure to pass `low_cpu_mem_usage=False` and `device_map=None` if you want to
                         randomly initialize those weights or else make sure your checkpoint file is correct.
17:59:43-971024 ERROR    Load: ValueError
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ C:\ai\automatic\modules\sd_models.py:1156 in load_diffuser                                                           │
│                                                                                                                      │
│   1155 │   │   │   │   │   from modules.model_flux import load_flux                                                  │
│ ❱ 1156 │   │   │   │   │   sd_model = load_flux(checkpoint_info, diffusers_load_config)                              │
│   1157 │   │   │   │   except Exception as e:                                                                        │
│                                                                                                                      │
│ C:\ai\automatic\modules\model_flux.py:241 in load_flux                                                               │
│                                                                                                                      │
│   240 │   shared.log.debug(f'Loading FLUX: preloaded={list(components)}')                                            │
│ ❱ 241 │   pipe = diffusers.FluxPipeline.from_pretrained(repo_id, cache_dir=shared.opts.diffusers_dir, **components,  │
│   242 │   return pipe                                                                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_validators.py:114 in _inner_fn                         │
│                                                                                                                      │
│   113 │   │                                                                                                          │
│ ❱ 114 │   │   return fn(*args, **kwargs)                                                                             │
│   115                                                                                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\diffusers\pipelines\pipeline_utils.py:859 in from_pretrained                  │
│                                                                                                                      │
│    858 │   │   │   │   # load sub model                                                                              │
│ ❱  859 │   │   │   │   loaded_sub_model = load_sub_model(                                                            │
│    860 │   │   │   │   │   library_name=library_name,                                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\diffusers\pipelines\pipeline_loading_utils.py:698 in load_sub_model           │
│                                                                                                                      │
│   697 │   if os.path.isdir(os.path.join(cached_folder, name)):                                                       │
│ ❱ 698 │   │   loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)                    │
│   699 │   else:                                                                                                      │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\huggingface_hub\utils\_validators.py:114 in _inner_fn                         │
│                                                                                                                      │
│   113 │   │                                                                                                          │
│ ❱ 114 │   │   return fn(*args, **kwargs)                                                                             │
│   115                                                                                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\diffusers\models\modeling_utils.py:740 in from_pretrained                     │
│                                                                                                                      │
│    739 │   │   │   │   │   if len(missing_keys) > 0:                                                                 │
│ ❱  740 │   │   │   │   │   │   raise ValueError(                                                                     │
│    741 │   │   │   │   │   │   │   f"Cannot load {cls} from {pretrained_model_name_or_path} because the following ke │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: Cannot load <class 'diffusers.models.transformers.transformer_flux.FluxTransformer2DModel'> from models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d5170c306a6eb\transformer because the following keys are missing:
 (same missing-keys list as printed above)
transformer_blocks.8.attn.to_add_out.weight, transformer_blocks.10.ff_context.net.0.proj.weight, single_transformer_blocks.24.attn.to_k.weight, transformer_blocks.13.ff_context.net.2.weight, transformer_blocks.16.attn.to_v.weight, transformer_blocks.10.attn.add_k_proj.weight, transformer_blocks.7.attn.add_q_proj.weight, single_transformer_blocks.26.norm.linear.weight, transformer_blocks.11.ff_context.net.0.proj.weight, single_transformer_blocks.13.attn.to_v.weight, single_transformer_blocks.16.proj_mlp.weight, single_transformer_blocks.13.proj_mlp.weight, transformer_blocks.14.norm1.linear.weight, single_transformer_blocks.1.norm.linear.weight, single_transformer_blocks.1.proj_mlp.weight, single_transformer_blocks.16.norm.linear.weight, transformer_blocks.17.attn.to_out.0.weight, transformer_blocks.5.attn.to_add_out.weight, transformer_blocks.13.ff.net.2.weight, transformer_blocks.12.ff.net.0.proj.weight, transformer_blocks.3.attn.to_out.0.weight, transformer_blocks.15.attn.to_out.0.weight, transformer_blocks.14.ff_context.net.2.weight, transformer_blocks.15.attn.add_k_proj.weight, transformer_blocks.17.ff_context.net.2.weight, single_transformer_blocks.8.attn.to_v.weight, time_text_embed.timestep_embedder.linear_2.weight, single_transformer_blocks.10.attn.to_q.weight, single_transformer_blocks.12.norm.linear.weight, transformer_blocks.4.attn.to_v.weight, single_transformer_blocks.24.attn.to_q.weight, transformer_blocks.0.norm1_context.linear.weight, transformer_blocks.17.norm1.linear.weight, transformer_blocks.18.attn.to_add_out.weight, single_transformer_blocks.24.attn.to_v.weight, single_transformer_blocks.25.proj_out.weight, transformer_blocks.12.norm1_context.linear.weight, single_transformer_blocks.10.attn.to_k.weight, single_transformer_blocks.15.proj_mlp.weight, single_transformer_blocks.31.attn.to_k.weight, transformer_blocks.5.attn.add_q_proj.weight, transformer_blocks.18.attn.to_k.weight, transformer_blocks.3.ff.net.0.proj.weight, 
single_transformer_blocks.20.proj_out.weight, time_text_embed.guidance_embedder.linear_2.weight, transformer_blocks.18.attn.add_v_proj.weight, single_transformer_blocks.21.norm.linear.weight, transformer_blocks.17.attn.add_k_proj.weight, transformer_blocks.17.attn.to_v.weight, transformer_blocks.9.ff.net.2.weight, transformer_blocks.17.ff.net.2.weight, single_transformer_blocks.14.proj_mlp.weight, x_embedder.weight, single_transformer_blocks.30.attn.to_k.weight, transformer_blocks.0.attn.add_v_proj.weight, single_transformer_blocks.31.proj_mlp.weight, single_transformer_blocks.32.proj_mlp.weight, transformer_blocks.14.attn.add_k_proj.weight, single_transformer_blocks.16.attn.to_q.weight, single_transformer_blocks.27.attn.to_q.weight, transformer_blocks.15.attn.to_k.weight, single_transformer_blocks.10.proj_out.weight, transformer_blocks.1.ff.net.2.weight, transformer_blocks.7.attn.to_add_out.weight, single_transformer_blocks.8.proj_mlp.weight, single_transformer_blocks.5.attn.to_v.weight, transformer_blocks.14.attn.add_q_proj.weight, transformer_blocks.11.norm1.linear.weight, single_transformer_blocks.31.proj_out.weight, transformer_blocks.2.attn.to_q.weight, transformer_blocks.16.attn.add_k_proj.weight, transformer_blocks.8.ff.net.0.proj.weight, single_transformer_blocks.22.attn.to_v.weight, single_transformer_blocks.17.attn.to_q.weight, transformer_blocks.18.ff.net.0.proj.weight, transformer_blocks.3.attn.to_add_out.weight, single_transformer_blocks.16.attn.to_k.weight, transformer_blocks.17.attn.add_v_proj.weight, single_transformer_blocks.1.attn.to_k.weight, single_transformer_blocks.29.proj_out.weight, transformer_blocks.3.attn.to_k.weight, single_transformer_blocks.21.attn.to_q.weight, transformer_blocks.15.norm1_context.linear.weight, transformer_blocks.13.ff_context.net.0.proj.weight, transformer_blocks.1.norm1_context.linear.weight, transformer_blocks.2.attn.add_q_proj.weight, transformer_blocks.6.norm1.linear.weight, 
single_transformer_blocks.31.attn.to_v.weight, single_transformer_blocks.17.norm.linear.weight, transformer_blocks.15.norm1.linear.weight, transformer_blocks.15.ff_context.net.0.proj.weight, single_transformer_blocks.37.proj_mlp.weight, transformer_blocks.0.ff_context.net.0.proj.weight, transformer_blocks.12.attn.to_q.weight, single_transformer_blocks.24.proj_mlp.weight, single_transformer_blocks.5.proj_mlp.weight, single_transformer_blocks.35.proj_mlp.weight, single_transformer_blocks.20.attn.to_k.weight, transformer_blocks.13.attn.to_out.0.weight, transformer_blocks.7.norm1.linear.weight, transformer_blocks.3.ff_context.net.2.weight, single_transformer_blocks.22.attn.to_q.weight, transformer_blocks.4.ff_context.net.2.weight, transformer_blocks.17.attn.to_q.weight, transformer_blocks.7.attn.to_out.0.weight, single_transformer_blocks.27.attn.to_k.weight, single_transformer_blocks.5.attn.to_k.weight, transformer_blocks.7.ff.net.2.weight, single_transformer_blocks.14.attn.to_k.weight, transformer_blocks.4.ff.net.0.proj.weight, single_transformer_blocks.29.attn.to_v.weight, transformer_blocks.3.norm1.linear.weight, transformer_blocks.0.ff.net.0.proj.weight, transformer_blocks.1.ff_context.net.2.weight, single_transformer_blocks.15.attn.to_q.weight, single_transformer_blocks.36.proj_mlp.weight, single_transformer_blocks.33.proj_out.weight, transformer_blocks.4.attn.to_k.weight, transformer_blocks.15.attn.add_q_proj.weight, transformer_blocks.4.attn.to_add_out.weight, transformer_blocks.1.attn.to_add_out.weight, single_transformer_blocks.28.attn.to_v.weight, transformer_blocks.11.attn.to_k.weight, transformer_blocks.5.attn.to_q.weight, transformer_blocks.17.attn.add_q_proj.weight, transformer_blocks.10.attn.add_q_proj.weight, transformer_blocks.7.attn.to_k.weight, single_transformer_blocks.17.proj_mlp.weight, single_transformer_blocks.26.attn.to_k.weight, single_transformer_blocks.29.attn.to_k.weight, single_transformer_blocks.34.attn.to_q.weight, 
single_transformer_blocks.25.proj_mlp.weight, transformer_blocks.1.attn.to_k.weight, single_transformer_blocks.19.attn.to_q.weight, transformer_blocks.0.attn.to_v.weight, transformer_blocks.2.attn.to_v.weight, transformer_blocks.8.ff_context.net.2.weight, transformer_blocks.9.norm1_context.linear.weight, single_transformer_blocks.1.attn.to_q.weight, single_transformer_blocks.19.proj_mlp.weight, transformer_blocks.8.attn.to_out.0.weight, single_transformer_blocks.30.proj_mlp.weight, single_transformer_blocks.28.attn.to_k.weight, single_transformer_blocks.12.proj_out.weight, transformer_blocks.6.attn.to_v.weight, transformer_blocks.2.ff.net.2.weight, single_transformer_blocks.23.proj_mlp.weight, transformer_blocks.12.attn.to_v.weight, single_transformer_blocks.34.norm.linear.weight, transformer_blocks.18.ff.net.2.weight, single_transformer_blocks.29.attn.to_q.weight, single_transformer_blocks.9.proj_mlp.weight, single_transformer_blocks.12.attn.to_q.weight, single_transformer_blocks.7.attn.to_v.weight, single_transformer_blocks.9.proj_out.weight, transformer_blocks.16.attn.to_add_out.weight, transformer_blocks.3.ff.net.2.weight, single_transformer_blocks.6.attn.to_v.weight, transformer_blocks.9.ff_context.net.2.weight, single_transformer_blocks.20.proj_mlp.weight, transformer_blocks.8.norm1_context.linear.weight, single_transformer_blocks.20.attn.to_v.weight, single_transformer_blocks.21.proj_out.weight, transformer_blocks.5.ff.net.2.weight, transformer_blocks.7.attn.to_v.weight, transformer_blocks.10.ff.net.0.proj.weight, transformer_blocks.9.attn.add_q_proj.weight, single_transformer_blocks.13.proj_out.weight, single_transformer_blocks.2.proj_mlp.weight, time_text_embed.guidance_embedder.linear_1.weight, transformer_blocks.11.attn.to_q.weight, transformer_blocks.8.ff_context.net.0.proj.weight, transformer_blocks.9.norm1.linear.weight, single_transformer_blocks.22.attn.to_k.weight, transformer_blocks.6.norm1_context.linear.weight, 
single_transformer_blocks.15.attn.to_v.weight, single_transformer_blocks.21.attn.to_v.weight, single_transformer_blocks.34.proj_mlp.weight, transformer_blocks.13.attn.to_q.weight, transformer_blocks.4.norm1.linear.weight, transformer_blocks.10.attn.to_v.weight, single_transformer_blocks.4.norm.linear.weight, transformer_blocks.1.ff_context.net.0.proj.weight, single_transformer_blocks.14.proj_out.weight, transformer_blocks.16.attn.to_k.weight, single_transformer_blocks.37.proj_out.weight, transformer_blocks.18.ff_context.net.2.weight, transformer_blocks.0.attn.to_k.weight, transformer_blocks.16.ff.net.2.weight, transformer_blocks.18.ff_context.net.0.proj.weight, transformer_blocks.8.attn.add_k_proj.weight, single_transformer_blocks.18.attn.to_v.weight, single_transformer_blocks.28.proj_out.weight, single_transformer_blocks.5.attn.to_q.weight, single_transformer_blocks.32.attn.to_k.weight, transformer_blocks.6.attn.to_add_out.weight, transformer_blocks.2.norm1_context.linear.weight, single_transformer_blocks.32.norm.linear.weight, single_transformer_blocks.37.norm.linear.weight, single_transformer_blocks.6.norm.linear.weight, single_transformer_blocks.10.proj_mlp.weight, single_transformer_blocks.6.proj_out.weight, single_transformer_blocks.30.attn.to_v.weight, single_transformer_blocks.27.norm.linear.weight, transformer_blocks.2.norm1.linear.weight, transformer_blocks.12.attn.add_v_proj.weight, single_transformer_blocks.36.attn.to_v.weight, single_transformer_blocks.25.norm.linear.weight, single_transformer_blocks.25.attn.to_v.weight, single_transformer_blocks.12.proj_mlp.weight, transformer_blocks.3.attn.add_q_proj.weight, single_transformer_blocks.18.proj_out.weight, transformer_blocks.18.attn.add_q_proj.weight, transformer_blocks.6.attn.add_k_proj.weight, transformer_blocks.18.attn.to_v.weight, single_transformer_blocks.25.attn.to_k.weight, transformer_blocks.1.attn.to_v.weight, single_transformer_blocks.2.attn.to_v.weight, 
single_transformer_blocks.11.norm.linear.weight, single_transformer_blocks.18.attn.to_k.weight, transformer_blocks.12.attn.to_add_out.weight, transformer_blocks.0.attn.to_out.0.weight, transformer_blocks.6.ff.net.2.weight, single_transformer_blocks.33.proj_mlp.weight, single_transformer_blocks.30.norm.linear.weight, single_transformer_blocks.27.proj_mlp.weight, single_transformer_blocks.35.proj_out.weight, transformer_blocks.10.ff.net.2.weight, single_transformer_blocks.26.proj_out.weight, transformer_blocks.13.attn.to_add_out.weight, single_transformer_blocks.17.proj_out.weight, single_transformer_blocks.19.proj_out.weight, transformer_blocks.4.ff.net.2.weight, single_transformer_blocks.0.norm.linear.weight, single_transformer_blocks.37.attn.to_k.weight, transformer_blocks.13.norm1.linear.weight, single_transformer_blocks.4.attn.to_k.weight, single_transformer_blocks.2.attn.to_k.weight, transformer_blocks.9.attn.add_k_proj.weight, transformer_blocks.2.attn.to_add_out.weight, single_transformer_blocks.37.attn.to_v.weight, single_transformer_blocks.9.norm.linear.weight, single_transformer_blocks.11.attn.to_q.weight, transformer_blocks.11.attn.to_add_out.weight, transformer_blocks.10.norm1.linear.weight, proj_out.weight, transformer_blocks.9.attn.to_v.weight, transformer_blocks.4.ff_context.net.0.proj.weight, single_transformer_blocks.29.norm.linear.weight, transformer_blocks.11.attn.add_k_proj.weight, single_transformer_blocks.35.attn.to_k.weight, transformer_blocks.11.norm1_context.linear.weight, transformer_blocks.15.ff.net.0.proj.weight, transformer_blocks.13.attn.to_k.weight, transformer_blocks.13.norm1_context.linear.weight, single_transformer_blocks.22.proj_out.weight, transformer_blocks.14.norm1_context.linear.weight, single_transformer_blocks.14.attn.to_v.weight, single_transformer_blocks.4.proj_mlp.weight, single_transformer_blocks.1.proj_out.weight, transformer_blocks.3.attn.to_v.weight, single_transformer_blocks.11.proj_out.weight, 
single_transformer_blocks.21.proj_mlp.weight, transformer_blocks.2.attn.add_v_proj.weight, transformer_blocks.6.attn.add_v_proj.weight, single_transformer_blocks.15.attn.to_k.weight, single_transformer_blocks.29.proj_mlp.weight, single_transformer_blocks.19.norm.linear.weight, transformer_blocks.14.attn.to_k.weight, transformer_blocks.3.attn.to_q.weight, single_transformer_blocks.34.proj_out.weight, single_transformer_blocks.3.proj_out.weight, transformer_blocks.16.norm1.linear.weight, transformer_blocks.14.attn.add_v_proj.weight, transformer_blocks.16.attn.add_v_proj.weight, transformer_blocks.0.norm1.linear.weight, single_transformer_blocks.23.attn.to_k.weight, single_transformer_blocks.10.attn.to_v.weight, transformer_blocks.1.attn.to_out.0.weight, single_transformer_blocks.32.attn.to_q.weight, single_transformer_blocks.16.proj_out.weight, single_transformer_blocks.17.attn.to_v.weight, transformer_blocks.4.attn.add_v_proj.weight, transformer_blocks.10.attn.to_out.0.weight, single_transformer_blocks.8.norm.linear.weight, transformer_blocks.10.attn.to_k.weight, single_transformer_blocks.4.proj_out.weight, single_transformer_blocks.18.attn.to_q.weight, transformer_blocks.16.norm1_context.linear.weight, single_transformer_blocks.7.attn.to_k.weight, transformer_blocks.8.attn.add_v_proj.weight, single_transformer_blocks.31.norm.linear.weight, transformer_blocks.0.attn.add_q_proj.weight, single_transformer_blocks.34.attn.to_v.weight, single_transformer_blocks.36.norm.linear.weight, single_transformer_blocks.7.norm.linear.weight, transformer_blocks.7.attn.to_q.weight, transformer_blocks.2.ff.net.0.proj.weight, transformer_blocks.5.norm1.linear.weight, transformer_blocks.8.attn.to_v.weight, transformer_blocks.4.norm1_context.linear.weight, single_transformer_blocks.15.norm.linear.weight, single_transformer_blocks.30.proj_out.weight, single_transformer_blocks.19.attn.to_k.weight, transformer_blocks.10.norm1_context.linear.weight, transformer_blocks.9.attn.to_q.weight, 
transformer_blocks.14.attn.to_v.weight, transformer_blocks.6.attn.add_q_proj.weight, single_transformer_blocks.11.proj_mlp.weight, transformer_blocks.13.attn.add_v_proj.weight, single_transformer_blocks.3.attn.to_q.weight, single_transformer_blocks.19.attn.to_v.weight, single_transformer_blocks.7.proj_out.weight, transformer_blocks.6.ff.net.0.proj.weight, transformer_blocks.11.ff.net.0.proj.weight, single_transformer_blocks.14.attn.to_q.weight, single_transformer_blocks.31.attn.to_q.weight, single_transformer_blocks.4.attn.to_v.weight, single_transformer_blocks.0.attn.to_v.weight, transformer_blocks.14.ff_context.net.0.proj.weight, single_transformer_blocks.35.attn.to_q.weight, single_transformer_blocks.13.attn.to_k.weight, transformer_blocks.11.ff.net.2.weight, single_transformer_blocks.6.attn.to_k.weight, transformer_blocks.7.attn.add_k_proj.weight, transformer_blocks.12.attn.add_q_proj.weight, transformer_blocks.15.attn.to_q.weight, single_transformer_blocks.33.attn.to_v.weight, single_transformer_blocks.12.attn.to_k.weight, transformer_blocks.6.ff_context.net.0.proj.weight, transformer_blocks.7.norm1_context.linear.weight, single_transformer_blocks.37.attn.to_q.weight, single_transformer_blocks.9.attn.to_k.weight, transformer_blocks.14.ff.net.0.proj.weight, single_transformer_blocks.11.attn.to_v.weight, transformer_blocks.9.attn.to_out.0.weight, single_transformer_blocks.4.attn.to_q.weight, transformer_blocks.15.attn.to_v.weight, transformer_blocks.17.ff_context.net.0.proj.weight, transformer_blocks.1.norm1.linear.weight, transformer_blocks.6.attn.to_q.weight, transformer_blocks.2.attn.add_k_proj.weight, transformer_blocks.7.ff_context.net.0.proj.weight, single_transformer_blocks.16.attn.to_v.weight, transformer_blocks.10.attn.to_add_out.weight, transformer_blocks.3.attn.add_v_proj.weight, transformer_blocks.13.attn.add_q_proj.weight, transformer_blocks.2.attn.to_out.0.weight, single_transformer_blocks.20.attn.to_q.weight, 
single_transformer_blocks.13.attn.to_q.weight, transformer_blocks.7.ff.net.0.proj.weight, transformer_blocks.16.attn.add_q_proj.weight, single_transformer_blocks.8.attn.to_q.weight, transformer_blocks.11.attn.add_v_proj.weight, single_transformer_blocks.18.norm.linear.weight, context_embedder.weight, single_transformer_blocks.33.attn.to_k.weight, time_text_embed.text_embedder.linear_2.weight, single_transformer_blocks.9.attn.to_v.weight, single_transformer_blocks.3.attn.to_v.weight, single_transformer_blocks.23.proj_out.weight, single_transformer_blocks.5.proj_out.weight, single_transformer_blocks.35.norm.linear.weight, single_transformer_blocks.14.norm.linear.weight, single_transformer_blocks.33.norm.linear.weight, single_transformer_blocks.28.norm.linear.weight, transformer_blocks.11.attn.to_out.0.weight, transformer_blocks.2.ff_context.net.2.weight, transformer_blocks.2.ff_context.net.0.proj.weight, single_transformer_blocks.28.proj_mlp.weight, single_transformer_blocks.27.attn.to_v.weight, single_transformer_blocks.32.attn.to_v.weight.
 Please make sure to pass `low_cpu_mem_usage=False` and `device_map=None` if you want to randomly initialize those weights or else make sure your checkpoint file is correct.
17:59:44-270221 DEBUG    Script callback init time: image_browser.py:ui_tabs=0.40 system-info.py:app_started=0.06
                         task_scheduler.py:app_started=0.13
17:59:44-271218 INFO     Startup time: 12.97 torch=4.12 gradio=1.28 diffusers=0.41 libraries=1.66 extensions=0.62
                         face-restore=0.20 ui-en=0.21 ui-txt2img=0.06 ui-img2img=0.21 ui-control=0.11 ui-settings=0.23
                         ui-extensions=1.02 ui-defaults=0.26 launch=0.13 api=0.09 app-started=0.19 checkpoint=1.95
17:59:44-273212 DEBUG    Save: file="config.json" json=35 bytes=1470 time=0.003
17:59:44-275207 DEBUG    Unused settings: ['cross_attention_options']
18:00:00-280156 DEBUG    Server: alive=True jobs=1 requests=3 uptime=23 memory=1.02/63.92 backend=Backend.DIFFUSERS
                         state=idle
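The "missing keys" failure above is characteristic of loading a quanto-quantized checkpoint through the plain diffusers model class: the loader compares the model's expected parameter names against the keys actually present in the checkpoint, and a quantized checkpoint stores packed tensors under different key names, so every plain `weight` key looks missing. A toy illustration of that comparison (the quantized key names below are illustrative, not the exact quanto serialization format):

```python
# Toy illustration of the "missing keys" check: diffusers compares the
# model's expected parameter names against the keys found in the checkpoint.
expected = {
    "transformer_blocks.0.attn.to_q.weight",
    "transformer_blocks.0.attn.to_k.weight",
}
# What a quantized checkpoint might contain instead (illustrative keys):
checkpoint = {
    "transformer_blocks.0.attn.to_q.weight._data",
    "transformer_blocks.0.attn.to_q.weight._scale",
    "transformer_blocks.0.attn.to_k.weight._data",
    "transformer_blocks.0.attn.to_k.weight._scale",
}
# Every expected key absent from the checkpoint is reported as missing:
missing = sorted(expected - checkpoint)
print(missing)  # both plain `.weight` keys are reported missing
```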
vladmandic commented 2 months ago

ahhh, making (slow) progress. updated.

btw, no idea what this is about?

github-staff deleted a comment from SAC020

SAC020 commented 2 months ago

> ahhh, making (slow) progress. updated.
>
> btw, no idea what this is about?

Progress indeed, but still not working yet.

I have no idea which comment they deleted; I haven't posted any further comments.
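For reference, the naming error in the original report ("Repo id must use alphanumeric chars or '-', '_', '.', ...") comes from Hugging Face Hub repo-id validation, which rejects the local `Diffusers\Disty0/...` path because of the backslash. A rough sketch of that rule (an approximation written for illustration, not the library's actual validator):

```python
import re

# Approximation of the repo-id rule quoted in the error message:
# alphanumerics plus '-', '_', '.'; '--' and '..' forbidden;
# '-' and '.' cannot start or end a name; max length 96.
REPO_ID_RE = re.compile(
    r"^[A-Za-z0-9](?:[A-Za-z0-9._-]*[A-Za-z0-9])?"
    r"(?:/[A-Za-z0-9](?:[A-Za-z0-9._-]*[A-Za-z0-9])?)?$"
)

def is_valid_repo_id(repo_id: str) -> bool:
    if len(repo_id) > 96 or "--" in repo_id or ".." in repo_id:
        return False
    return REPO_ID_RE.fullmatch(repo_id) is not None

print(is_valid_repo_id("Disty0/FLUX.1-dev-qint4"))             # True
print(is_valid_repo_id("Diffusers\\Disty0/FLUX.1-dev-qint4"))  # False: '\' is not allowed
```

A Windows-style path with a backslash can never pass this check, which is why passing the local folder name where a repo id is expected fails.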

c:\ai\automatic>.\webui.bat --medvram --debug
Using VENV: c:\ai\automatic\venv
20:01:57-218835 INFO     Starting SD.Next
20:01:57-221827 INFO     Logger: file="c:\ai\automatic\sdnext.log" level=DEBUG size=65 mode=create
20:01:57-222825 INFO     Python version=3.11.9 platform=Windows bin="c:\ai\automatic\venv\Scripts\python.exe"
                         venv="c:\ai\automatic\venv"
20:01:57-433262 INFO     Version: app=sd.next updated=2024-09-04 hash=db6a52a7 branch=dev
                         url=https://github.com/vladmandic/automatic/tree/dev ui=dev
20:01:58-409638 INFO     Latest published version: bab17a0b4f91b41c885f10262ef8c8e70ba72faa 2024-08-31T20:57:34Z
20:01:58-423600 INFO     Platform: arch=AMD64 cpu=Intel64 Family 6 Model 165 Stepping 5, GenuineIntel system=Windows
                         release=Windows-10-10.0.22631-SP0 python=3.11.9
20:01:58-426592 DEBUG    Setting environment tuning
20:01:58-427590 INFO     HF cache folder: C:\Users\sebas\.cache\huggingface\hub
20:01:58-429585 DEBUG    Torch allocator: "garbage_collection_threshold:0.80,max_split_size_mb:512"
20:01:58-442550 DEBUG    Torch overrides: cuda=False rocm=False ipex=False diml=False openvino=False
20:01:58-443547 DEBUG    Torch allowed: cuda=True rocm=True ipex=True diml=True openvino=True
20:01:58-454549 INFO     nVidia CUDA toolkit detected: nvidia-smi present
20:01:58-562230 WARNING  Modified files: ['models/Reference/playgroundai--playground-v2-1024px-aesthetic.jpg']
20:01:58-664955 INFO     Verifying requirements
20:01:58-668944 INFO     Verifying packages
20:01:58-715819 DEBUG    Repository update time: Wed Sep  4 19:32:57 2024
20:01:58-716818 INFO     Startup: standard
20:01:58-717816 INFO     Verifying submodules
20:02:02-075255 DEBUG    Git detached head detected: folder="extensions-builtin/sd-extension-chainner" reattach=main
20:02:02-077279 DEBUG    Submodule: extensions-builtin/sd-extension-chainner / main
20:02:02-209895 DEBUG    Git detached head detected: folder="extensions-builtin/sd-extension-system-info" reattach=main
20:02:02-210893 DEBUG    Submodule: extensions-builtin/sd-extension-system-info / main
20:02:02-354509 DEBUG    Git detached head detected: folder="extensions-builtin/sd-webui-agent-scheduler" reattach=main
20:02:02-355506 DEBUG    Submodule: extensions-builtin/sd-webui-agent-scheduler / main
20:02:02-535277 DEBUG    Git detached head detected: folder="extensions-builtin/sdnext-modernui" reattach=dev
20:02:02-536274 DEBUG    Submodule: extensions-builtin/sdnext-modernui / dev
20:02:02-693852 DEBUG    Git detached head detected: folder="extensions-builtin/stable-diffusion-webui-rembg"
                         reattach=master
20:02:02-694849 DEBUG    Submodule: extensions-builtin/stable-diffusion-webui-rembg / master
20:02:02-823506 DEBUG    Git detached head detected: folder="modules/k-diffusion" reattach=master
20:02:02-824503 DEBUG    Submodule: modules/k-diffusion / master
20:02:02-956152 DEBUG    Git detached head detected: folder="wiki" reattach=master
20:02:02-957149 DEBUG    Submodule: wiki / master
20:02:03-122631 DEBUG    Register paths
20:02:03-218375 DEBUG    Installed packages: 209
20:02:03-220370 DEBUG    Extensions all: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'sdnext-modernui',
                         'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg']
20:02:03-416844 DEBUG    Running extension installer:
                         C:\ai\automatic\extensions-builtin\sd-extension-system-info\install.py
20:02:03-859661 DEBUG    Running extension installer:
                         C:\ai\automatic\extensions-builtin\sd-webui-agent-scheduler\install.py
20:02:04-278540 DEBUG    Running extension installer: C:\ai\automatic\extensions-builtin\sd-webui-controlnet\install.py
20:02:04-874946 DEBUG    Running extension installer:
                         C:\ai\automatic\extensions-builtin\stable-diffusion-webui-images-browser\install.py
20:02:05-314770 DEBUG    Running extension installer:
                         C:\ai\automatic\extensions-builtin\stable-diffusion-webui-rembg\install.py
20:02:05-757586 DEBUG    Extensions all: []
20:02:05-758584 INFO     Extensions enabled: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'sdnext-modernui',
                         'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg']
20:02:05-760579 INFO     Verifying requirements
20:02:05-761576 DEBUG    Setup complete without errors: 1725469326
20:02:05-775539 DEBUG    Extension preload: {'extensions-builtin': 0.01, 'extensions': 0.0}
20:02:05-777534 DEBUG    Starting module: <module 'webui' from 'c:\\ai\\automatic\\webui.py'>
20:02:05-778532 INFO     Command line args: ['--medvram', '--debug'] medvram=True debug=True
20:02:05-780526 DEBUG    Env flags: ['SD_LOAD_DEBUG=true']
20:02:17-357715 INFO     Load packages: {'torch': '2.4.0+cu124', 'diffusers': '0.31.0.dev0', 'gradio': '3.43.2'}
20:02:18-996830 DEBUG    Read: file="config.json" json=35 bytes=1517 time=0.000
20:02:18-998568 DEBUG    Unknown settings: ['cross_attention_options']
20:02:19-001904 INFO     Engine: backend=Backend.DIFFUSERS compute=cuda device=cuda attention="Scaled-Dot-Product"
                         mode=no_grad
20:02:19-055515 INFO     Device: device=NVIDIA GeForce RTX 4080 n=1 arch=sm_90 cap=(8, 9) cuda=12.4 cudnn=90100
                         driver=560.81
20:02:19-058507 DEBUG    Read: file="html\reference.json" json=52 bytes=29118 time=0.001
20:02:19-987373 DEBUG    ONNX: version=1.19.0 provider=CUDAExecutionProvider, available=['AzureExecutionProvider',
                         'CPUExecutionProvider']
20:02:20-350784 DEBUG    Importing LDM
20:02:20-384132 DEBUG    Entering start sequence
20:02:20-387596 DEBUG    Initializing
20:02:20-444498 INFO     Available VAEs: path="models\VAE" items=0
20:02:20-446465 DEBUG    Available UNets: path="models\UNET" items=0
20:02:20-448082 DEBUG    Available T5s: path="models\T5" items=0
20:02:20-449079 INFO     Disabled extensions: ['sd-webui-controlnet', 'sdnext-modernui']
20:02:20-452072 DEBUG    Read: file="cache.json" json=2 bytes=10089 time=0.001
20:02:20-459052 DEBUG    Read: file="metadata.json" json=559 bytes=1861768 time=0.006
20:02:20-465125 DEBUG    Scanning diffusers cache: folder=models\Diffusers items=2 time=0.00
20:02:20-467002 INFO     Available models: path="models\Stable-diffusion" items=21 time=0.02
20:02:21-156128 DEBUG    Load extensions
20:02:21-253129 INFO     Extension: script='extensions-builtin\Lora\scripts\lora_script.py'
                         20:02:21-247351 INFO     LoRA networks: available=70 folders=2
20:02:22-040634 INFO     Extension: script='extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py' Using
                         sqlite file: extensions-builtin\sd-webui-agent-scheduler\task_scheduler.sqlite3
20:02:22-313218 DEBUG    Extensions init time: 1.16 sd-extension-chainner=0.07 sd-webui-agent-scheduler=0.70
                         stable-diffusion-webui-images-browser=0.25
20:02:22-345963 DEBUG    Read: file="html/upscalers.json" json=4 bytes=2672 time=0.000
20:02:22-347236 DEBUG    Read: file="extensions-builtin\sd-extension-chainner\models.json" json=24 bytes=2719 time=0.000
20:02:22-350230 DEBUG    chaiNNer models: path="models\chaiNNer" defined=24 discovered=0 downloaded=8
20:02:22-352225 DEBUG    Upscaler type=ESRGAN folder="models\ESRGAN" model="1x-ITF-SkinDiffDetail-Lite-v1"
                         path="models\ESRGAN\1x-ITF-SkinDiffDetail-Lite-v1.pth"
20:02:22-353222 DEBUG    Upscaler type=ESRGAN folder="models\ESRGAN" model="4xNMKDSuperscale_4xNMKDSuperscale"
                         path="models\ESRGAN\4xNMKDSuperscale_4xNMKDSuperscale.pth"
20:02:22-354219 DEBUG    Upscaler type=ESRGAN folder="models\ESRGAN" model="4x_NMKD-Siax_200k"
                         path="models\ESRGAN\4x_NMKD-Siax_200k.pth"
20:02:22-358208 DEBUG    Load upscalers: total=56 downloaded=11 user=3 time=0.04 ['None', 'Lanczos', 'Nearest',
                         'ChaiNNer', 'AuraSR', 'ESRGAN', 'LDSR', 'RealESRGAN', 'SCUNet', 'SD', 'SwinIR']
20:02:22-375395 DEBUG    Load styles: folder="models\styles" items=288 time=0.02
20:02:22-381407 DEBUG    Creating UI
20:02:22-382451 DEBUG    UI themes available: type=Standard themes=12
20:02:22-384449 INFO     UI theme: type=Standard name="black-teal"
20:02:22-391430 DEBUG    UI theme: css="C:\ai\automatic\javascript\black-teal.css" base="sdnext.css" user="None"
20:02:22-395007 DEBUG    UI initialize: txt2img
20:02:22-455846 DEBUG    Networks: page='model' items=72 subfolders=2 tab=txt2img folders=['models\\Stable-diffusion',
                         'models\\Diffusers', 'models\\Reference'] list=0.04 thumb=0.01 desc=0.00 info=0.00 workers=4
                         sort=Default
20:02:22-464822 DEBUG    Networks: page='lora' items=70 subfolders=0 tab=txt2img folders=['models\\Lora',
                         'models\\LyCORIS'] list=0.04 thumb=0.01 desc=0.02 info=0.03 workers=4 sort=Default
20:02:22-495740 DEBUG    Networks: page='style' items=288 subfolders=1 tab=txt2img folders=['models\\styles', 'html']
                         list=0.04 thumb=0.00 desc=0.00 info=0.00 workers=4 sort=Default
20:02:22-500263 DEBUG    Networks: page='embedding' items=13 subfolders=0 tab=txt2img folders=['models\\embeddings']
                         list=0.03 thumb=0.01 desc=0.00 info=0.00 workers=4 sort=Default
20:02:22-502258 DEBUG    Networks: page='vae' items=0 subfolders=0 tab=txt2img folders=['models\\VAE'] list=0.00
                         thumb=0.00 desc=0.00 info=0.00 workers=4 sort=Default
20:02:22-584040 DEBUG    UI initialize: img2img
20:02:22-841086 DEBUG    UI initialize: control models=models\control
20:02:23-117801 DEBUG    Read: file="ui-config.json" json=0 bytes=2 time=0.000
20:02:23-220723 DEBUG    UI themes available: type=Standard themes=12
20:02:23-788205 DEBUG    Reading failed: C:\ai\automatic\html\extensions.json [Errno 2] No such file or directory:
                         'C:\\ai\\automatic\\html\\extensions.json'
20:02:23-790756 INFO     Extension list is empty: refresh required
20:02:24-380172 DEBUG    Extension list: processed=8 installed=8 enabled=6 disabled=2 visible=8 hidden=0
20:02:24-733069 DEBUG    Root paths: ['c:\\ai\\automatic']
20:02:24-843958 INFO     Local URL: http://127.0.0.1:7860/
20:02:24-844955 DEBUG    Gradio functions: registered=2366
20:02:24-847946 DEBUG    FastAPI middleware: ['Middleware', 'Middleware']
20:02:24-850939 DEBUG    Creating API
20:02:25-018490 INFO     [AgentScheduler] Task queue is empty
20:02:25-020487 INFO     [AgentScheduler] Registering APIs
20:02:25-146150 DEBUG    Scripts setup: ['IP Adapters:0.022', 'AnimateDiff:0.007', 'CogVideoX:0.006', 'X/Y/Z
                         Grid:0.169', 'Face:0.012', 'Image-to-Video:0.007']
20:02:25-147147 DEBUG    Model metadata: file="metadata.json" no changes
20:02:25-148145 DEBUG    Torch mode: deterministic=False
20:02:25-176069 INFO     Torch override VAE dtype: no-half set
20:02:25-177068 DEBUG    Desired Torch parameters: dtype=FP16 no-half=False no-half-vae=True upscast=False
20:02:25-178065 INFO     Setting Torch parameters: device=cuda dtype=torch.float16 vae=torch.float32 unet=torch.float16
                         context=no_grad fp16=True bf16=None optimization=Scaled-Dot-Product
20:02:25-180059 DEBUG    Model requested: fn=<lambda>
20:02:25-181057 INFO     Select: model="Diffusers\Disty0/FLUX.1-dev-qint4 [82811df42b]"
20:02:25-182054 DEBUG    Load model: existing=False
                         target=models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d
                         5170c306a6eb info=None
20:02:25-183051 DEBUG    Diffusers loading:
                         path="models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d5
                         170c306a6eb"
20:02:25-185045 INFO     Autodetect: model="FLUX" class=FluxPipeline
                         file="models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d5
                         170c306a6eb" size=0MB
20:02:25-194021 DEBUG    Loading FLUX: model="Diffusers\Disty0/FLUX.1-dev-qint4" repo="Disty0/FLUX.1-dev-qint4"
                         unet="None" t5="None" vae="None" quant=qint4 offload=model dtype=torch.float16
20:02:25-195018 TRACE    Loading FLUX: config={'low_cpu_mem_usage': True, 'torch_dtype': torch.float16,
                         'load_connected_pipeline': True, 'safety_checker': None, 'requires_safety_checker': False}
20:02:25-713291 TRACE    Loading FLUX: quantization
                         map="models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d51
                         70c306a6eb\transformer\quantization_map.json" repo="Diffusers\Disty0/FLUX.1-dev-qint4"
                         component="transformer"
transformer/quantization_map.json: 100%|███████████████████████████████████████████| 42.9k/42.9k [00:00<00:00, 345kB/s]
20:03:58-111204 ERROR    Loading FLUX: Failed to cast transformer to torch.float16, set dtype to torch.bfloat16
20:03:58-112201 TRACE    Loading FLUX: quantization
                         map="models\Diffusers\models--Disty0--FLUX.1-dev-qint4\snapshots\82811df42b556a1153b971d8375d51
                         70c306a6eb\text_encoder_2\quantization_map.json" repo="Diffusers\Disty0/FLUX.1-dev-qint4"
                         component="text_encoder_2"
text_encoder_2/quantization_map.json: 100%|███████████████████████████████████████████████| 15.1k/15.1k [00:00<?, ?B/s]
20:04:38-563559 ERROR    Loading FLUX: Failed to cast text encoder to torch.float16, set dtype to torch.float16
20:04:38-565551 DEBUG    Loading FLUX: preloaded=['transformer', 'text_encoder_2']
Loading pipeline components... 100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7/7  [ 0:00:00 < 0:00:00 , 4 C/s ]
20:04:40-130008 INFO     Load embeddings: loaded=0 skipped=13 time=0.02
20:04:40-207139 DEBUG    Setting model VAE: no-half=True
20:04:40-208135 DEBUG    Setting model: slicing=True
20:04:40-210129 DEBUG    Setting model: tiling=True
20:04:40-210129 DEBUG    Setting model: attention=Scaled-Dot-Product
20:04:40-229080 DEBUG    Setting model: offload=model
20:04:40-752365 DEBUG    GC: utilization={'gpu': 8, 'ram': 31, 'threshold': 80} gc={'collected': 561, 'saved': 0.0}
                         before={'gpu': 1.33, 'ram': 19.59} after={'gpu': 1.33, 'ram': 19.59, 'retries': 0, 'oom': 0}
                         device=cuda fn=load_diffuser time=0.21
20:04:40-754332 INFO     Load model: time=135.36 load=134.92 options=0.10 move=0.31 native=1024 {'ram': {'used': 19.59,
                         'total': 63.92}, 'gpu': {'used': 1.33, 'total': 15.99}, 'retries': 0, 'oom': 0}
20:04:40-757324 DEBUG    Script callback init time: image_browser.py:ui_tabs=0.42 system-info.py:app_started=0.06
                         task_scheduler.py:app_started=0.14
20:04:40-759319 INFO     Startup time: 154.97 torch=7.64 gradio=2.20 diffusers=1.73 libraries=2.99 samplers=0.06
                         extensions=1.16 face-restore=0.69 ui-en=0.22 ui-txt2img=0.06 ui-img2img=0.22 ui-control=0.12
                         ui-settings=0.24 ui-extensions=1.05 ui-defaults=0.27 launch=0.17 api=0.09 app-started=0.20
                         checkpoint=135.61
20:04:40-760316 DEBUG    Save: file="config.json" json=35 bytes=1408 time=0.003
20:04:40-763307 DEBUG    Unused settings: ['cross_attention_options']
20:04:52-713683 INFO     MOTD: N/A
20:04:55-273269 DEBUG    UI themes available: type=Standard themes=12
20:04:55-487899 INFO     Browser session: user=None client=127.0.0.1 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64)
                         AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36 Edg/128.0.0.0
20:05:12-395501 INFO     Base: class=FluxPipeline
20:05:12-397496 DEBUG    Sampler default FlowMatchEulerDiscreteScheduler: {'num_train_timesteps': 1000, 'shift': 3.0,
                         'use_dynamic_shifting': True, 'base_shift': 0.5, 'max_shift': 1.15, 'base_image_seq_len': 256,
                         'max_image_seq_len': 4096}
20:05:12-416445 DEBUG    Torch generator: device=cuda seeds=[2753494145]
20:05:12-417443 DEBUG    Diffuser pipeline: FluxPipeline task=DiffusersTaskType.TEXT_2_IMAGE batch=1/1x1 set={'prompt':
                         1, 'guidance_scale': 6, 'num_inference_steps': 20, 'output_type': 'latent', 'width': 1024,
                         'height': 1024, 'parser': 'Fixed attention'}
Progress ?it/s                                              0% 0/20 00:03 ? Base
20:05:18-541098 ERROR    Processing: args={'prompt': ['photo of a woman'], 'guidance_scale': 6, 'generator':
                         [<torch._C.Generator object at 0x000001D50D2C9C70>], 'callback_on_step_end': <function
                         diffusers_callback at 0x000001D50D84F880>, 'callback_on_step_end_tensor_inputs': ['latents'],
                         'num_inference_steps': 20, 'output_type': 'latent', 'width': 1024, 'height': 1024} expected
                         mat1 and mat2 to have the same dtype, but got: struct c10::Half != struct c10::BFloat16
20:05:18-543095 ERROR    Processing: RuntimeError
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ C:\ai\automatic\modules\processing_diffusers.py:122 in process_diffusers                                             │
│                                                                                                                      │
│   121 │   │   else:                                                                                                  │
│ ❱ 122 │   │   │   output = shared.sd_model(**base_args)                                                              │
│   123 │   │   if isinstance(output, dict):                                                                           │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\torch\utils\_contextlib.py:116 in decorate_context                            │
│                                                                                                                      │
│   115 │   │   with ctx_factory():                                                                                    │
│ ❱ 116 │   │   │   return func(*args, **kwargs)                                                                       │
│   117                                                                                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\diffusers\pipelines\flux\pipeline_flux.py:719 in __call__                     │
│                                                                                                                      │
│   718 │   │   │   │                                                                                                  │
│ ❱ 719 │   │   │   │   noise_pred = self.transformer(                                                                 │
│   720 │   │   │   │   │   hidden_states=latents,                                                                     │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\torch\nn\modules\module.py:1553 in _wrapped_call_impl                         │
│                                                                                                                      │
│   1552 │   │   else:                                                                                                 │
│ ❱ 1553 │   │   │   return self._call_impl(*args, **kwargs)                                                           │
│   1554                                                                                                               │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\torch\nn\modules\module.py:1562 in _call_impl                                 │
│                                                                                                                      │
│   1561 │   │   │   │   or _global_forward_hooks or _global_forward_pre_hooks):                                       │
│ ❱ 1562 │   │   │   return forward_call(*args, **kwargs)                                                              │
│   1563                                                                                                               │
│                                                                                                                      │
│                                               ... 4 frames hidden ...                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\optimum\quanto\nn\qlinear.py:45 in forward                                    │
│                                                                                                                      │
│   44 │   def forward(self, input: torch.Tensor) -> torch.Tensor:                                                     │
│ ❱ 45 │   │   return torch.nn.functional.linear(input, self.qweight, bias=self.bias)                                  │
│   46                                                                                                                 │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\optimum\quanto\tensor\qtensor.py:90 in __torch_function__                     │
│                                                                                                                      │
│   89 │   │   if qfunc is not None:                                                                                   │
│ ❱ 90 │   │   │   return qfunc(*args, **kwargs)                                                                       │
│   91 │   │   # Defer to dispatcher to look instead for QTensor subclasses operations                                 │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\optimum\quanto\tensor\qtensor_func.py:152 in linear                           │
│                                                                                                                      │
│   151 def linear(func, input, other, bias=None):                                                                     │
│ ❱ 152 │   return QTensorLinear.apply(input, other, bias)                                                             │
│   153                                                                                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\torch\autograd\function.py:574 in apply                                       │
│                                                                                                                      │
│   573 │   │   │   args = _functorch.utils.unwrap_dead_wrappers(args)                                                 │
│ ❱ 574 │   │   │   return super().apply(*args, **kwargs)  # type: ignore[misc]                                        │
│   575                                                                                                                │
│                                                                                                                      │
│ c:\ai\automatic\venv\Lib\site-packages\optimum\quanto\tensor\qtensor_func.py:128 in forward                          │
│                                                                                                                      │
│   127 │   │   else:                                                                                                  │
│ ❱ 128 │   │   │   output = torch.matmul(input, other.t())                                                            │
│   129 │   │   if bias is not None:                                                                                   │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: expected mat1 and mat2 to have the same dtype, but got: struct c10::Half != struct c10::BFloat16
20:05:18-810519 INFO     Processed: images=0 time=6.43 its=0.00 memory={'ram': {'used': 15.07, 'total': 63.92}, 'gpu':
                         {'used': 7.68, 'total': 15.99}, 'retries': 0, 'oom': 0}
vladmandic commented 2 months ago

ok, this seems near the finish line

> 20:03:58-111204 ERROR Loading FLUX: Failed to cast transformer to torch.float16, set dtype to torch.bfloat16

i need to figure out fp16 vs bf16 internally, for now set settings -> compute -> device precision -> bf16

> I have no idea what comment they've deleted, I haven't made any further comment

i think it was a previous comment related to when a user posted a malicious download link that i've removed and flagged
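The dtype mismatch in the traceback above can be reproduced outside SD.Next with plain PyTorch — a minimal sketch (the layer and tensor shapes here are illustrative, not SD.Next code) of float16 activations hitting weights that live in bfloat16, and of the bf16 workaround suggested above:

```python
import torch

# Weights held in bfloat16 (as the quanto-quantized transformer ends up),
# activations arriving in float16 (the pipeline's configured dtype).
linear = torch.nn.Linear(4, 4).to(torch.bfloat16)
x = torch.randn(1, 4, dtype=torch.float16)

try:
    linear(x)  # mat1 (fp16) and mat2 (bf16) disagree -> RuntimeError
except RuntimeError as e:
    print(f"RuntimeError: {e}")

# Workaround from the thread: run everything in bf16 so the dtypes agree.
y = linear(x.to(torch.bfloat16))
print(y.dtype)  # torch.bfloat16
```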

SAC020 commented 2 months ago

> ok, this seems near the finish line
>
> 20:03:58-111204 ERROR Loading FLUX: Failed to cast transformer to torch.float16, set dtype to torch.bfloat16
>
> i need to figure out fp16 vs bf16 internally, for now set settings -> compute -> device precision -> bf16

Thank you, it works now and I can run generations. I just need to remember to switch back to fp16 when using normal SDXL models.

Is 1.66s/it normal on a 4080? It is not running out of VRAM, but it seems significantly slower than your benchmark. Should I open a separate issue?

07:22:21-429561 DEBUG    Sampler default FlowMatchEulerDiscreteScheduler: {'num_train_timesteps': 1000, 'shift': 3.0,
                         'use_dynamic_shifting': True, 'base_shift': 0.5, 'max_shift': 1.15, 'base_image_seq_len': 256,
                         'max_image_seq_len': 4096}
07:22:21-454494 DEBUG    Torch generator: device=cuda seeds=[3330659293]
07:22:21-455490 DEBUG    Diffuser pipeline: FluxPipeline task=DiffusersTaskType.TEXT_2_IMAGE batch=1/1x1 set={'prompt':
                         1, 'guidance_scale': 6, 'num_inference_steps': 30, 'output_type': 'latent', 'width': 1024,
                         'height': 1024, 'parser': 'Fixed attention'}
Progress  2.64s/it ██▎                                  7% 2/30 00:05 01:13 Base
07:22:29-043639 DEBUG    VAE load: type=taesd model=models\TAESD\taef1_decoder.pth
Progress  1.66s/it █████████████████████████████████ 100% 30/30 00:49 00:00 Base
07:23:19-089701 INFO     Processed: images=1 time=57.68 its=0.52 memory={'ram': {'used': 12.33, 'total': 63.92}, 'gpu':
                         {'used': 2.4, 'total': 15.99}, 'retries': 0, 'oom': 0}
vladmandic commented 2 months ago

it's within the expected range, my numbers were for an rtx4090.

> I just need to remember to switch back to fp16 when using normal SDXL models.

why? what's wrong with just leaving bf16?

SAC020 commented 2 months ago

> why? what's wrong with just leaving bf16?

For inference I don't know the difference / couldn't compare results, but training with bf16 yielded lower quality than training with fp16.
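One plausible reason for that training-quality gap (my reading, not established in this thread): bf16 keeps fp32's exponent range but has only a 7-bit mantissa versus fp16's 10 bits, so bf16 rounds away small values that fp16 can still represent — which matters more for weight updates during training than for inference. A tiny sketch:

```python
import torch

# 1 + 2**-10 is exactly representable in fp16 (10-bit mantissa),
# but bf16 (7-bit mantissa) rounds it back down to 1.0.
x = torch.tensor(1.0009765625)
print(x.to(torch.float16).item())   # 1.0009765625 (kept)
print(x.to(torch.bfloat16).item())  # 1.0 (rounded away)
```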