vladmandic / automatic

SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
https://github.com/vladmandic/automatic
GNU Affero General Public License v3.0

Loras not working and also appear to not being loaded. #3305

Open DalekSkaro1 opened 3 days ago

DalekSkaro1 commented 3 days ago

Discussed in https://github.com/vladmandic/automatic/discussions/3303

Originally posted by **DalekSkaro1** June 29, 2024

When trying to add a lora to the prompt, the generated image does not seem to be affected by the lora, even with the same loras that I had used previously and that had worked back then. On the command line the lora does not appear to load: the output ends with "Loading model: C:\AI2\models\Lora\add-detail-XL.safetensors -------------------------- 0.0/218 MB -:--:--" and "22:05:50-606574 INFO LoRA apply: ['add-detail-XL'] patch=0.03 load=1.07". This problem appears to have begun when the Olive options first appeared for me (I hadn't used SD in some time). I have tried uninstalling the entire automatic folder and re-installing it. I am also sure that the loras aren't being applied, because setting a weight of 500 does not generate an image of pure noise but the same kind of picture as if there were no lora at all.
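The weight-500 test above relies on how LoRA is merged into the base weights. A minimal sketch of that math (illustrative numpy only, an assumption about the standard LoRA formulation, not SD.Next's actual implementation): the low-rank delta is scaled by the multiplier, so a large multiplier should drastically change the effective weights, and identical outputs at 1.0 and 500 imply the delta is never merged at all.

```python
# Sketch of the standard LoRA merge: W' = W + multiplier * (B @ A).
# All shapes and values here are illustrative, not from SD.Next.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))          # base model weight matrix
A = rng.normal(size=(4, 8)) * 0.1    # LoRA down-projection (rank 4)
B = rng.normal(size=(8, 4)) * 0.1    # LoRA up-projection

def apply_lora(W, A, B, multiplier):
    """Merge the low-rank delta into the base weight."""
    return W + multiplier * (B @ A)

# Mean absolute change to the weights at two strengths:
d1 = np.abs(apply_lora(W, A, B, 1.0) - W).mean()
d30 = np.abs(apply_lora(W, A, B, 30.0) - W).mean()
print(f"mean |delta| at 1.0: {d1:.4f}, at 30.0: {d30:.4f}")
assert d30 > d1  # a higher multiplier must perturb the weights more
```

If the merge step is skipped (the apparent bug here), the multiplier has no effect because the delta term is simply absent.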

Here is the full output of the command prompt when running SD.next:

Using VENV: C:\AI2\venv
11:13:22-323316 INFO     Starting SD.Next
11:13:22-348006 INFO     Logger: file="C:\AI2\sdnext.log" level=INFO size=461538 mode=append
11:13:22-351296 INFO     Python version=3.11.6 platform=Windows bin="C:\AI2\venv\Scripts\python.exe" venv="C:\AI2\venv"
11:13:24-661502 INFO     Version: app=sd.next updated=2024-06-24 hash=94f6f0db branch=master
                         url=https://github.com/vladmandic/automatic/tree/master ui=main
11:13:27-978428 INFO     Latest published version: 081c19fc122c6c8e60fcddfc37917e8107f65290 2024-07-01T08:20:42Z
11:13:28-014304 INFO     Platform: arch=AMD64 cpu=Intel64 Family 6 Model 158 Stepping 10, GenuineIntel system=Windows
                         release=Windows-10-10.0.19045-SP0 python=3.11.6
11:13:28-019292 INFO     HF cache folder: C:\Users\felip\.cache\huggingface\hub
11:13:28-022309 INFO     nVidia CUDA toolkit detected: nvidia-smi present
11:13:31-515676 INFO     Verifying requirements
11:13:31-623275 INFO     Verifying packages
11:13:32-998753 INFO     Extensions: disabled=['lora-prompt-tool']
11:13:33-001726 INFO     Extensions: enabled=['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sdnext-modernui', 'stable-diffusion-webui-rembg']
                         extensions-builtin
11:13:33-122193 INFO     Extensions: enabled=[] extensions
11:13:33-134153 INFO     Startup: quick launch
11:13:33-175174 INFO     Extensions: disabled=['lora-prompt-tool']
11:13:33-182299 INFO     Extensions: enabled=['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sdnext-modernui', 'stable-diffusion-webui-rembg']
                         extensions-builtin
11:13:33-188283 INFO     Extensions: enabled=[] extensions
11:13:33-302966 INFO     Command line args: []
11:14:49-534552 INFO     Load packages: {'torch': '2.3.1+cu121', 'diffusers': '0.29.1', 'gradio': '3.43.2'}
11:14:54-622084 INFO     VRAM: Detected=4.0 GB Optimization=lowvram
11:14:54-631087 INFO     Engine: backend=Backend.DIFFUSERS compute=cuda device=cuda attention="Scaled-Dot-Product"
                         mode=no_grad
11:14:54-919886 INFO     Device: device=NVIDIA GeForce GTX 1050 Ti n=1 arch=sm_90 cap=(6, 1) cuda=12.1 cudnn=8907
                         driver=536.23
11:15:05-046182 INFO     Available VAEs: path="models\VAE" items=2
11:15:05-050940 INFO     Disabled extensions: ['sdnext-modernui', 'lora-prompt-tool']
11:15:05-152167 INFO     Available models: path="models\Stable-diffusion" items=39 time=0.08
11:15:10-090279 INFO     Extension: script='extensions-builtin\Lora\scripts\lora_script.py'
                         11:15:10-073752 INFO     LoRA networks: available=46
                         folders=2
11:15:13-263267 INFO     Extension: script='extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py'
                         Using sqlite file: extensions-builtin\sd-webui-agent-scheduler\task_scheduler.sqlite3
11:15:13-435594 INFO     UI theme: type=Standard name="black-teal"
11:15:17-353507 INFO     Local URL: http://127.0.0.1:7860/
11:15:17-606147 INFO     [AgentScheduler] Task queue is empty
11:15:17-609140 INFO     [AgentScheduler] Registering APIs
11:15:18-170644 INFO     Torch override dtype: no-half set
11:15:18-172643 INFO     Torch override VAE dtype: no-half set
11:15:18-174636 INFO     Setting Torch parameters: device=cuda dtype=torch.float32 vae=torch.float32 unet=torch.float32
                         context=no_grad fp16=True bf16=None optimization=Scaled-Dot-Product
11:15:18-181601 INFO     Select: model="himawarimix_xlV13 [d9d661baad]"
11:15:18-185590 INFO     Autodetect: model="Stable Diffusion XL" class=StableDiffusionXLPipeline
                         file="C:\AI2\models\Stable-diffusion\himawarimix_xlV13.safetensors" size=6776MB
Fetching 17 files: 100%|██████████████████████████████████████████████████████████████| 17/17 [00:00<00:00, 622.83it/s]
Loading pipeline components... 100% ------------------------------------------------ 7/7  [ 0:05:23 < 0:00:00 , 0 C/s ]
11:21:29-995203 INFO     Load embeddings: loaded=0 skipped=0 time=0.14
11:21:31-352165 INFO     Load model: time=372.83 load=371.67 embeddings=0.16 move=0.99 native=1024 {'ram': {'used':
                         11.66, 'total': 15.93}, 'gpu': {'used': 0.72, 'total': 4.0}, 'retries': 0, 'oom': 0}
11:21:37-356236 INFO     Startup time: 477.97 torch=59.90 gradio=10.40 diffusers=5.78 libraries=14.63 ldm=0.07
                         samplers=0.78 extensions=3.51 models=0.10 face-restore=4.63 upscalers=0.08 networks=0.06
                         ui-en=0.38 ui-txt2img=0.09 ui-img2img=0.08 ui-control=0.12 ui-models=0.56 ui-settings=0.43
                         ui-extensions=1.42 ui-defaults=0.06 launch=0.74 api=0.13 app-started=0.20 checkpoint=373.69
11:26:04-605341 INFO     Browser session: user=None client=127.0.0.1 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64)
                         AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36 Edg/126.0.0.0
11:26:05-742333 INFO     MOTD: N/A
Loading model: C:\AI2\models\Lora\sakuemonq.safetensors ---------------------------------------- 0.0/114.4 MB -:--:--
11:28:16-402037 INFO     LoRA apply: ['sakuemonq'] patch=0.00 load=8.08
11:28:16-598730 INFO     Base: class=StableDiffusionXLPipeline
Progress 21.78s/it █████████████████████████████████ 100% 23/23 08:20 00:00 Base
11:39:09-235606 INFO     Save: image="outputs\text\00049-himawarimix_xlV13-best quality score 9 score 8 up score.jpg"
                         type=JPEG resolution=784x784 size=129694
11:39:11-137114 INFO     Processed: images=1 time=662.85 its=0.03 memory={'ram': {'used': 11.65, 'total': 15.93},
                         'gpu': {'used': 1.07, 'total': 4.0}, 'retries': 0, 'oom': 0}
11:49:31-910492 INFO     Base: class=StableDiffusionXLPipeline
Progress 21.05s/it █████████████████████████████████ 100% 23/23 08:04 00:00 Base
12:00:02-511461 INFO     Save: image="outputs\text\00050-himawarimix_xlV13-best quality score 9 score 8 up score.jpg"
                         type=JPEG resolution=784x784 size=128029
12:00:04-435616 INFO     Processed: images=1 time=632.67 its=0.04 memory={'ram': {'used': 10.52, 'total': 15.93},
                         'gpu': {'used': 1.07, 'total': 4.0}, 'retries': 0, 'oom': 0}
12:07:39-857789 INFO     LoRA apply: ['sakuemonq'] patch=0.08 load=0.08
12:07:40-502259 INFO     Base: class=StableDiffusionXLPipeline
Progress 21.54s/it █████████████████████████████████ 100% 23/23 08:15 00:00 Base
12:18:52-371273 INFO     Save: image="outputs\text\00051-himawarimix_xlV13-best quality score 9 score 8 up score.jpg"
                         type=JPEG resolution=784x784 size=129696
12:18:53-285038 INFO     Processed: images=1 time=673.69 its=0.03 memory={'ram': {'used': 11.21, 'total': 15.93},
                         'gpu': {'used': 1.07, 'total': 4.0}, 'retries': 0, 'oom': 0}

I am using an Intel i5-9400F and an NVIDIA GTX 1050 Ti; image generation and loras were working until some months ago. I am using the HimawariMix SDXL 1.0 checkpoint with the sakuemonq pixel art lora. The tests were run with the lora at weight 1.0, with no lora, and with the same lora at weight 30.0, all with the same seed, and there is no difference between the 1.0 and 30.0 results.
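A quick way to make the "no difference" claim concrete is to diff the renders pixel-wise. This is a hedged sketch (helper name and threshold are hypothetical, arrays stand in for the actual saved JPEGs): with a fixed seed, a working lora at weight 30.0 should produce an image far from the no-lora baseline.

```python
# Sketch: compare two renders pixel-wise to detect whether a lora had
# any effect. The arrays below are synthetic stand-ins for the real
# images; in practice one would load them via PIL and np.asarray().
import numpy as np

def images_differ(img_a, img_b, threshold=1.0):
    """True if the mean absolute pixel difference exceeds the threshold."""
    a = np.asarray(img_a, dtype=np.float32)
    b = np.asarray(img_b, dtype=np.float32)
    return float(np.abs(a - b).mean()) > threshold

no_lora = np.full((4, 4, 3), 100, dtype=np.uint8)
identical = no_lora.copy()     # what the reporter observed at weight 30.0
changed = no_lora + 50         # what a working lora should produce

print(images_differ(no_lora, identical))  # lora had no effect
print(images_differ(no_lora, changed))    # lora altered the image
```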

vladmandic commented 3 days ago

i cannot reproduce the problem - i've downloaded the lora you've indicated and tested at different strengths and results are exactly as expected:

pixlated

DalekSkaro1 commented 3 days ago

I think this may have something to do with PyTorch not working correctly. I have downloaded a1111 and it works fine, but during its installation it seems to have re-downloaded PyTorch, so I will try to uninstall Python and re-install it. As you can see from the command prompt output, SD.Next finds the lora; it just loads 0.0/114.4 MB, and so there is no effect on image processing. Strangely enough, SD.Next is still able to generate images even with a possible PyTorch error.
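Before reinstalling Python entirely, one could at least confirm that the venv can resolve the packages the startup log claims to load. A generic stdlib sketch (not an SD.Next tool; the module list is an assumption based on the `Load packages` log line), safe to run inside `C:\AI2\venv`:

```python
# Check whether key packages are importable in the active environment,
# without actually importing them (so a broken torch install won't crash this).
import importlib.util

def check_imports(names):
    """Map each top-level module name to whether it can be found."""
    return {n: importlib.util.find_spec(n) is not None for n in names}

for name, ok in check_imports(["torch", "diffusers", "safetensors"]).items():
    print(f"{name}: {'found' if ok else 'MISSING'}")
```

If everything reports `found` but the lora still loads as 0.0 MB, the problem is more likely in how the file is read or applied than in the PyTorch install itself.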

Here are some samples of images generated on my PC with the HimawariMix SDXL 1.0 checkpoint and the sakuemonq pixel art lora:

Prompt: best quality, score_9, score_8_up, score_8, score_7_up, chameleon, source_anime, solo,

Steps: 23, Seed: 2110750869

No Lora: 00050-himawarimix_xlV13-best quality score 9 score 8 up score

Lora at 1.0: 00049-himawarimix_xlV13-best quality score 9 score 8 up score

Lora at 30.0: 00051-himawarimix_xlV13-best quality score 9 score 8 up score

vladmandic commented 3 days ago

run with --debug command line param so log captures more details.

DalekSkaro1 commented 3 days ago

Here you go. I have again run Prompt: best quality, score_9, score_8_up, score_8, score_7_up, chameleon, source_anime, solo,

Steps: 23, Seed: 2110750869

with the same images being returned as before.

Using VENV: C:\AI2\venv
03:28:36-468561 INFO     Starting SD.Next
03:28:36-475573 INFO     Logger: file="C:\AI2\sdnext.log" level=DEBUG size=65 mode=create
03:28:36-478595 INFO     Python version=3.11.6 platform=Windows bin="C:\AI2\venv\Scripts\python.exe" venv="C:\AI2\venv"
03:28:38-259171 INFO     Version: app=sd.next updated=2024-06-24 hash=94f6f0db branch=master
                         url=https://github.com/vladmandic/automatic/tree/master ui=main
03:28:41-717535 DEBUG    Branch sync failed: sdnext=master ui=main
03:28:43-567737 INFO     Latest published version: 081c19fc122c6c8e60fcddfc37917e8107f65290 2024-07-01T08:20:42Z
03:28:43-625588 INFO     Platform: arch=AMD64 cpu=Intel64 Family 6 Model 158 Stepping 10, GenuineIntel system=Windows
                         release=Windows-10-10.0.19045-SP0 python=3.11.6
03:28:43-632569 DEBUG    Setting environment tuning
03:28:43-634564 INFO     HF cache folder: C:\Users\felip\.cache\huggingface\hub
03:28:43-635561 DEBUG    Torch allocator: "garbage_collection_threshold:0.80,max_split_size_mb:512"
03:28:43-637556 DEBUG    Torch overrides: cuda=False rocm=False ipex=False diml=False openvino=False
03:28:43-640548 DEBUG    Torch allowed: cuda=True rocm=True ipex=True diml=True openvino=True
03:28:43-642865 INFO     nVidia CUDA toolkit detected: nvidia-smi present
03:28:45-388732 INFO     Verifying requirements
03:28:45-442917 INFO     Verifying packages
03:28:45-856871 DEBUG    Repository update time: Mon Jun 24 16:35:47 2024
03:28:45-858870 INFO     Startup: standard
03:28:45-859868 INFO     Verifying submodules
03:28:53-676362 DEBUG    Submodule: extensions-builtin/sd-extension-chainner / main
03:28:54-152229 DEBUG    Submodule: extensions-builtin/sd-extension-system-info / main
03:28:54-430593 DEBUG    Submodule: extensions-builtin/sd-webui-agent-scheduler / main
03:28:54-820557 DEBUG    Git detached head detected: folder="extensions-builtin/sdnext-modernui" reattach=main
03:28:54-825544 DEBUG    Submodule: extensions-builtin/sdnext-modernui / main
03:28:55-099569 DEBUG    Submodule: extensions-builtin/stable-diffusion-webui-rembg / master
03:28:55-448728 DEBUG    Submodule: modules/k-diffusion / master
03:28:55-840950 DEBUG    Git detached head detected: folder="wiki" reattach=master
03:28:55-844934 DEBUG    Submodule: wiki / master
03:28:55-984508 DEBUG    Register paths
03:28:56-089211 DEBUG    Installed packages: 184
03:28:56-091206 DEBUG    Extensions all: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sdnext-modernui', 'stable-diffusion-webui-rembg']
03:28:56-364294 DEBUG    Running extension installer: C:\AI2\extensions-builtin\sd-webui-agent-scheduler\install.py
03:28:56-949495 DEBUG    Running extension installer: C:\AI2\extensions-builtin\stable-diffusion-webui-rembg\install.py
03:28:57-393072 DEBUG    Extensions all: []
03:28:57-396065 INFO     Extensions enabled: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sdnext-modernui', 'stable-diffusion-webui-rembg']
03:28:57-400054 INFO     Verifying requirements
03:28:57-401051 DEBUG    Setup complete without errors: 1719901737
03:28:57-626994 DEBUG    Extension preload: {'extensions-builtin': 0.07, 'extensions': 0.04}
03:28:57-635304 DEBUG    Starting module: <module 'webui' from 'C:\\AI2\\webui.py'>
03:28:57-637298 INFO     Command line args: ['--debug'] debug=True
03:28:57-639292 DEBUG    Env flags: []
03:29:40-863881 INFO     Load packages: {'torch': '2.3.1+cu121', 'diffusers': '0.29.1', 'gradio': '3.43.2'}
03:29:44-934634 INFO     VRAM: Detected=4.0 GB Optimization=lowvram
03:29:44-939383 DEBUG    Read: file="config.json" json=44 bytes=1821 time=0.001
03:29:44-943373 INFO     Engine: backend=Backend.DIFFUSERS compute=cuda device=cuda attention="Scaled-Dot-Product"
                         mode=no_grad
03:29:45-235348 INFO     Device: device=NVIDIA GeForce GTX 1050 Ti n=1 arch=sm_90 cap=(6, 1) cuda=12.1 cudnn=8907
                         driver=536.23
03:29:45-248600 DEBUG    Read: file="html\reference.json" json=43 bytes=24978 time=0.007
03:29:49-098856 DEBUG    ONNX: version=1.18.1 provider=CUDAExecutionProvider, available=['AzureExecutionProvider',
                         'CPUExecutionProvider']
03:29:49-445163 DEBUG    Importing LDM
03:29:49-495370 DEBUG    Entering start sequence
03:29:49-500381 DEBUG    Initializing
03:29:49-667770 INFO     Available VAEs: path="models\VAE" items=2
03:29:49-673151 DEBUG    Available UNets: path="models\UNET" items=0
03:29:49-675982 INFO     Disabled extensions: ['sdnext-modernui', 'lora-prompt-tool']
03:29:49-680066 DEBUG    Read: file="cache.json" json=2 bytes=400 time=0.000
03:29:49-715594 DEBUG    Read: file="metadata.json" json=88 bytes=186621 time=0.032
03:29:49-722532 DEBUG    Scanning diffusers cache: folder=models\Diffusers items=0 time=0.00
03:29:49-725427 INFO     Available models: path="models\Stable-diffusion" items=38 time=0.05
03:29:51-702540 DEBUG    Load extensions
03:29:51-926442 INFO     Extension: script='extensions-builtin\Lora\scripts\lora_script.py'
                         03:29:51-897194 INFO     LoRA networks: available=44
                         folders=2
03:29:53-752712 INFO     Extension: script='extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py'
                         Using sqlite file: extensions-builtin\sd-webui-agent-scheduler\task_scheduler.sqlite3
03:29:53-790922 DEBUG    Extensions init time: 2.09 Lora=0.06 sd-extension-chainner=0.20 sd-webui-agent-scheduler=1.62
03:29:53-835852 DEBUG    Read: file="html/upscalers.json" json=4 bytes=2672 time=0.007
03:29:53-849264 DEBUG    Read: file="extensions-builtin\sd-extension-chainner\models.json" json=24 bytes=2719
                         time=0.009
03:29:53-854824 DEBUG    chaiNNer models: path="models\chaiNNer" defined=24 discovered=0 downloaded=0
03:29:53-861806 DEBUG    Load upscalers: total=52 downloaded=3 user=0 time=0.07 ['None', 'Lanczos', 'Nearest',
                         'ChaiNNer', 'ESRGAN', 'LDSR', 'RealESRGAN', 'SCUNet', 'SD', 'SwinIR']
03:29:53-910622 DEBUG    Load styles: folder="models\styles" items=288 time=0.04
03:29:53-919659 DEBUG    Creating UI
03:29:53-922653 DEBUG    UI themes available: type=Standard themes=12
03:29:53-924647 INFO     UI theme: type=Standard name="black-teal"
03:29:53-933648 DEBUG    UI theme: css="C:\AI2\javascript\black-teal.css" base="sdnext.css" user="None"
03:29:53-938656 DEBUG    UI initialize: txt2img
03:29:54-004981 DEBUG    Networks: page='model' items=80 subfolders=2 tab=txt2img folders=['models\\Stable-diffusion',
                         'models\\Diffusers', 'models\\Reference'] list=0.04 thumb=0.01 desc=0.01 info=0.00 workers=4
                         sort=Default
03:29:54-017925 DEBUG    Networks: page='lora' items=44 subfolders=0 tab=txt2img folders=['models\\Lora',
                         'models\\LyCORIS'] list=0.03 thumb=0.01 desc=0.02 info=0.00 workers=4 sort=Default
03:29:54-070856 DEBUG    Networks: page='style' items=288 subfolders=1 tab=txt2img folders=['models\\styles', 'html']
                         list=0.04 thumb=0.00 desc=0.00 info=0.00 workers=4 sort=Default
03:29:54-080767 DEBUG    Networks: page='embedding' items=0 subfolders=0 tab=txt2img folders=['models\\embeddings']
                         list=0.00 thumb=0.00 desc=0.00 info=0.00 workers=4 sort=Default
03:29:54-087634 DEBUG    Networks: page='vae' items=2 subfolders=0 tab=txt2img folders=['models\\VAE'] list=0.02
                         thumb=0.01 desc=0.00 info=0.00 workers=4 sort=Default
03:29:54-368919 DEBUG    UI initialize: img2img
03:29:54-500629 DEBUG    UI initialize: control models=models\control
03:29:55-262678 DEBUG    Read: file="ui-config.json" json=0 bytes=2 time=0.017
03:29:55-448476 DEBUG    UI themes available: type=Standard themes=12
03:29:56-672464 DEBUG    Extension list: processed=368 installed=7 enabled=5 disabled=2 visible=368 hidden=0
03:29:56-817050 DEBUG    Root paths: ['C:\\AI2']
03:29:57-272804 INFO     Local URL: http://127.0.0.1:7860/
03:29:57-275523 DEBUG    Gradio functions: registered=1704
03:29:57-292741 DEBUG    FastAPI middleware: ['Middleware', 'Middleware']
03:29:57-298579 DEBUG    Creating API
03:29:57-492564 INFO     [AgentScheduler] Task queue is empty
03:29:57-495484 INFO     [AgentScheduler] Registering APIs
03:29:57-615164 DEBUG    Scripts setup: ['IP Adapters:0.062', 'AnimateDiff:0.009', 'X/Y/Z Grid:0.016', 'Face:0.014',
                         'Image-to-Video:0.007', 'Stable Video Diffusion:0.006']
03:29:57-620235 DEBUG    Model metadata: file="metadata.json" no changes
03:29:57-622260 DEBUG    Torch mode: deterministic=False
03:29:58-240876 INFO     Torch override dtype: no-half set
03:29:58-243209 INFO     Torch override VAE dtype: no-half set
03:29:58-244908 DEBUG    Desired Torch parameters: dtype=FP16 no-half=True no-half-vae=True upscast=False
03:29:58-248898 INFO     Setting Torch parameters: device=cuda dtype=torch.float32 vae=torch.float32 unet=torch.float32
                         context=no_grad fp16=True bf16=None optimization=Scaled-Dot-Product
03:29:58-254305 DEBUG    Model requested: fn=<lambda>
03:29:58-257297 INFO     Select: model="himawarimix_xlV13 [d9d661baad]"
03:29:58-259291 DEBUG    Load model: existing=False target=C:\AI2\models\Stable-diffusion\himawarimix_xlV13.safetensors
                         info=None
03:29:58-263299 DEBUG    Diffusers loading: path="C:\AI2\models\Stable-diffusion\himawarimix_xlV13.safetensors"
03:29:58-266296 INFO     Autodetect: model="Stable Diffusion XL" class=StableDiffusionXLPipeline
                         file="C:\AI2\models\Stable-diffusion\himawarimix_xlV13.safetensors" size=6776MB
Fetching 17 files: 100%|████████████████████████████████████████████████████████████| 17/17 [00:00<00:00, 17037.79it/s]
Loading pipeline components... 100% ------------------------------------------------ 7/7  [ 0:09:18 < 0:00:00 , 0 C/s ]
03:40:17-621778 DEBUG    Setting model: pipeline=StableDiffusionXLPipeline config={'low_cpu_mem_usage': True,
                         'torch_dtype': torch.float32, 'load_connected_pipeline': True, 'extract_ema': False, 'config':
                         None, 'use_safetensors': True, 'cache_dir': 'C:\\Users\\felip\\.cache\\huggingface\\hub'}
03:40:18-860228 INFO     Load embeddings: loaded=0 skipped=0 time=0.23
03:40:18-865217 DEBUG    Setting model: enable VAE slicing
03:40:18-882075 DEBUG    Setting model: enable VAE tiling
03:40:18-897522 DEBUG    Setting model: enable sequential CPU offload
03:40:20-117479 DEBUG    Read: file="C:\AI2\configs\sdxl\vae\config.json" json=15 bytes=674 time=0.127
03:40:20-467507 DEBUG    GC: utilization={'gpu': 18, 'ram': 67, 'threshold': 80} gc={'collected': 127, 'saved': 0.0}
                         beofre={'gpu': 0.72, 'ram': 10.69} after={'gpu': 0.72, 'ram': 10.69, 'retries': 0, 'oom': 0}
                         device=cuda fn=load_diffuser time=0.31
03:40:20-497654 INFO     Load model: time=621.87 load=620.36 embeddings=0.24 move=1.23 native=1024 {'ram': {'used':
                         10.69, 'total': 15.93}, 'gpu': {'used': 0.72, 'total': 4.0}, 'retries': 0, 'oom': 0}
03:40:20-600394 DEBUG    Script callback init time: system-info.py:app_started=0.06 task_scheduler.py:app_started=0.14
03:40:20-611954 INFO     Startup time: 682.80 torch=32.48 gradio=6.73 diffusers=3.90 libraries=8.58 ldm=0.05
                         samplers=0.17 extensions=2.09 models=0.05 face-restore=1.97 upscalers=0.08 networks=0.05
                         ui-en=0.32 ui-txt2img=0.25 ui-img2img=0.08 ui-control=0.12 ui-models=0.46 ui-settings=0.36
                         ui-extensions=1.10 ui-defaults=0.08 launch=0.53 api=0.12 app-started=0.20 checkpoint=622.93
03:40:20-614205 DEBUG    Save: file="config.json" json=44 bytes=1760 time=0.037
03:41:59-713147 DEBUG    Server: alive=True jobs=1 requests=1 uptime=736 memory=10.69/15.93 backend=Backend.DIFFUSERS
                         state=idle
03:43:59-787645 DEBUG    Server: alive=True jobs=1 requests=1 uptime=856 memory=10.66/15.93 backend=Backend.DIFFUSERS
                         state=idle
03:45:59-854804 DEBUG    Server: alive=True jobs=1 requests=1 uptime=976 memory=10.6/15.93 backend=Backend.DIFFUSERS
                         state=idle
03:46:51-011167 DEBUG    UI themes available: type=Standard themes=12
03:46:51-150756 INFO     MOTD: N/A
03:46:51-437068 INFO     Browser session: user=None client=127.0.0.1 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64)
                         AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36 Edg/126.0.0.0
03:47:59-920071 DEBUG    Server: alive=True jobs=1 requests=34 uptime=1096 memory=10.26/15.93 backend=Backend.DIFFUSERS
                         state=idle
03:50:00-039230 DEBUG    Server: alive=True jobs=1 requests=64 uptime=1217 memory=10.13/15.93 backend=Backend.DIFFUSERS
                         state=idle
Loading model: C:\AI2\models\Lora\sakuemonq.safetensors ---------------------------------------- 0.0/114.4 MB -:--:--
03:50:29-033435 INFO     LoRA apply: ['sakuemonq'] patch=0.00 load=10.55
03:50:29-239883 INFO     Base: class=StableDiffusionXLPipeline
03:52:00-435221 DEBUG    Server: alive=True jobs=1 requests=251 uptime=1337 memory=9.22/15.93 backend=Backend.DIFFUSERS
                         state=idle
03:52:59-551844 DEBUG    Torch generator: device=cuda seeds=[2110750869]
03:52:59-571834 DEBUG    Diffuser pipeline: StableDiffusionXLPipeline task=DiffusersTaskType.TEXT_2_IMAGE batch=1/1x1
                         set={'prompt_embeds': torch.Size([1, 77, 2048]), 'pooled_prompt_embeds': torch.Size([1,
                         1280]), 'negative_prompt_embeds': torch.Size([1, 77, 2048]), 'negative_pooled_prompt_embeds':
                         torch.Size([1, 1280]), 'guidance_scale': 6, 'num_inference_steps': 23, 'eta': 1.0,
                         'guidance_rescale': 0.7, 'denoising_end': None, 'output_type': 'latent', 'width': 784,
                         'height': 784, 'parser': 'Full parser'}
Progress ?it/s                                              0% 0/23 00:00 ? Base
03:53:59-986724 DEBUG    Server: alive=True jobs=1 requests=373 uptime=1457 memory=9.47/15.93 backend=Backend.DIFFUSERS
                         state=idle
03:56:00-116370 DEBUG    Server: alive=True jobs=1 requests=495 uptime=1577 memory=9.37/15.93 backend=Backend.DIFFUSERS
                         state=idle
03:58:00-241206 DEBUG    Server: alive=True jobs=1 requests=618 uptime=1697 memory=10.12/15.93
                         backend=Backend.DIFFUSERS state=idle
Progress 401.45s/it █▎                                4% 1/23 06:41 2:27:11 Base
03:59:59-920681 DEBUG    Server: alive=True jobs=1 requests=738 uptime=1816 memory=10.81/15.93
                         backend=Backend.DIFFUSERS state=idle
Progress 178.81s/it ██▋                               9% 2/23 07:04 1:02:34 Base
04:00:16-549083 DEBUG    VAE load: type=approximate model=models\VAE-approx\model.pt
Progress  7.85s/it ██████████████████████▉            70% 16/23 08:53 00:54 Base
04:01:59-785539 DEBUG    Server: alive=True jobs=1 requests=859 uptime=1936 memory=10.86/15.93
                         backend=Backend.DIFFUSERS state=idle
Progress 25.41s/it █████████████████████████████████ 100% 23/23 09:44 00:00 Base
04:03:32-710416 INFO     Save: image="outputs\text\00052-himawarimix_xlV13-best quality score 9 score 8 up score.jpg"
                         type=JPEG resolution=784x784 size=129694
04:03:33-068418 INFO     Processed: images=1 time=794.58 its=0.03 memory={'ram': {'used': 11.08, 'total': 15.93},
                         'gpu': {'used': 1.07, 'total': 4.0}, 'retries': 0, 'oom': 0}
04:03:59-698905 DEBUG    Server: alive=True jobs=1 requests=965 uptime=2056 memory=11.08/15.93
                         backend=Backend.DIFFUSERS state=idle
04:05:59-787932 DEBUG    Server: alive=True jobs=1 requests=967 uptime=2176 memory=11.08/15.93
                         backend=Backend.DIFFUSERS state=idle
04:07:59-888165 DEBUG    Server: alive=True jobs=1 requests=969 uptime=2296 memory=10.96/15.93
                         backend=Backend.DIFFUSERS state=idle
04:09:59-944403 DEBUG    Server: alive=True jobs=1 requests=971 uptime=2417 memory=10.93/15.93
                         backend=Backend.DIFFUSERS state=idle
04:10:42-740052 INFO     LoRA apply: ['sakuemonq'] patch=0.03 load=0.03
04:10:42-832987 INFO     Base: class=StableDiffusionXLPipeline
04:10:43-675334 DEBUG    Torch generator: device=cuda seeds=[2110750869]
04:10:43-705693 DEBUG    Diffuser pipeline: StableDiffusionXLPipeline task=DiffusersTaskType.TEXT_2_IMAGE batch=1/1x1
                         set={'prompt_embeds': torch.Size([1, 77, 2048]), 'pooled_prompt_embeds': torch.Size([1,
                         1280]), 'negative_prompt_embeds': torch.Size([1, 77, 2048]), 'negative_pooled_prompt_embeds':
                         torch.Size([1, 1280]), 'guidance_scale': 6, 'num_inference_steps': 23, 'eta': 1.0,
                         'guidance_rescale': 0.7, 'denoising_end': None, 'output_type': 'latent', 'width': 784,
                         'height': 784, 'parser': 'Full parser'}
Progress  7.67s/it ███████████▊                        35% 8/23 01:13 01:55 Base
04:11:59-509205 DEBUG    Server: alive=True jobs=1 requests=1069 uptime=2536 memory=10.68/15.93
                         backend=Backend.DIFFUSERS state=idle
Progress  7.97s/it █████████████████████████████████ 100% 23/23 03:03 00:00 Base
04:13:59-711051 DEBUG    Server: alive=True jobs=1 requests=1204 uptime=2656 memory=10.61/15.93
                         backend=Backend.DIFFUSERS state=idle
04:14:25-301486 INFO     Save: image="outputs\text\00053-himawarimix_xlV13-best quality score 9 score 8 up score.jpg"
                         type=JPEG resolution=784x784 size=129696
04:14:25-367575 INFO     Processed: images=1 time=223.13 its=0.10 memory={'ram': {'used': 10.55, 'total': 15.93},
                         'gpu': {'used': 1.07, 'total': 4.0}, 'retries': 0, 'oom': 0}

vladmandic commented 2 days ago

the uploaded log looks clean, as both the model and the lora get loaded without issues. i cannot reproduce using the same lora, but the issue is clearly present here.

DalekSkaro1 commented 2 days ago

Are there any options on the SYSTEM tab that could be interfering with lora loading or application? Something like model compile, or the Olive optimizations. I believe I disabled all of these when the problem first began, but there could still be something.

vladmandic commented 2 days ago

correct - those might interfere, but i don't see anything enabled in the log right now that shouldn't be. plus you're not using any of the compute backends that would require precompile, like openvino or olive; you're using plain nvidia cuda with lowvram settings.

Velitha commented 1 day ago

I'm having this issue as well. Here's the log with --debug, and a copy-paste of the console for good measure. sdnext.log console.txt

Velitha commented 1 day ago

Rolled back a few commits. VAE was a bit slow, but it finished the image properly with the same prompts and everything. sdnext.log console.txt