vladmandic / automatic

SD.Next: Advanced Implementation of Generative Image Models
https://github.com/vladmandic/automatic
GNU Affero General Public License v3.0

[Issue]: Textual Inversion models do not load and don't seem to be parsed by the negative prompt form (and tokenizer?) #3177

Closed · mart-hill closed this 6 months ago

mart-hill commented 6 months ago

Issue Description

When loading SD.Next on the newest commit (f5283c37) there's an error appearing (excerpt from the log): 2024-05-29 04:19:42,695 | sd | ERROR | textual_inversion | Model not loaded

The embeddings from the negative prompt aren't being parsed/used while generating an image; I haven't tested the positive prompt yet.

I hope it's not an extension issue; embeddings were parsed correctly when I was using the previous master commit.

Version Platform Description

03:58:09-613176 INFO     Logger: file="sdnext.log" level=INFO size=508186 mode=append
03:58:09-615676 INFO     Python 3.10.9 on Windows
03:58:10-856676 INFO     Version: app=sd.next updated=2024-05-28 hash=f5283c37 branch=master
                         url=https://github.com/vladmandic/automatic/tree/master
03:58:12-952677 INFO     Platform: arch=AMD64 cpu=Intel64 Family 6 Model 85 Stepping 4, GenuineIntel system=Windows
                         release=Windows-10-10.0.19045-SP0 python=3.10.9
03:58:12-978176 INFO     nVidia CUDA toolkit detected: nvidia-smi present

04:00:31-771677 INFO     Extensions enabled: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info', 'sd-webui-agent-scheduler',
                         'sdnext-modernui', 'stable-diffusion-webui-rembg', 'a1111-sd-webui-tagcomplete', 'adetailer',
                         'advanced_euler_sampler_extension', 'CFG-Schedule-for-Automatic1111-SD', 'embedding-inspector', 'model-keyword',
                         'OneButtonPrompt', 'openOutpaint-webUI-extension', 'openpose-editor', 'prompt-fusion-extension', 'PXL8',
                         'sd-dynamic-prompts', 'sd-extension-nudenet', 'sd-infinity-grid-generator-script', 'sd-model-preview-xd',
                         'sd-pixel', 'sd-webui-ar', 'sd-webui-aspect-ratio-helper', 'sd-webui-check-tensors', 'sd-webui-color-enhance',
                         'sd-webui-cutoff', 'sd-webui-freeu', 'sd-webui-infinite-image-browsing', 'sd-webui-lora-block-weight',
                         'sd-webui-negpip', 'sd-webui-openpose-editor', 'sd-webui-pixelart', 'sd-webui-prompt-all-in-one',
                         'sd-webui-stablesr', 'sd-webui-supermerger', 'sdweb-merge-block-weighted-gui', 'sd_webui_SAG',
                         'stable-diffusion-prompt-pai', 'stable-diffusion-webui-anti-burn', 'stable-diffusion-webui-cafe-aesthetic',
                         'stable-diffusion-webui-model-toolkit', 'stable-diffusion-webui-pixelization', 'stable-diffusion-webui-two-shot',
                         'ultimate-upscale-for-automatic1111', 'Umi-AI']
04:00:31-891177 INFO     Command line args: []
04:00:58-284676 INFO     Load packages: {'torch': '2.2.1+cu121', 'diffusers': '0.28.0', 'gradio': '3.43.2'}
04:01:03-558676 INFO     VRAM: Detected=24.0 GB Optimization=none
04:01:03-564176 INFO     Engine: backend=Backend.ORIGINAL compute=cuda device=cuda attention="Scaled-Dot-Product" mode=no_grad
04:01:03-779676 INFO     Device: device=NVIDIA GeForce RTX 3090 n=1 arch=sm_90 cap=(8, 6) cuda=12.1 cudnn=8801 driver=555.85
04:01:17-841177 INFO     Available VAEs: path="models\VAE" items=85
04:01:17-848677 INFO     Disabled extensions: ['sdnext-modernui', 'ABG_extension', 'SD-latent-mirroring', 'TokenMixer',
                         'multidiffusion-upscaler-for-automatic1111', 'sd-face-editor', 'sd-webui-additional-networks',
                         'sd-webui-animatediff', 'sd-webui-bayesian-merger', 'sd-webui-neutral-prompt', 'sdweb-merge-board',
                         'stable-diffusion-webui-Prompt_Generator', 'stable-diffusion-webui-aesthetic-gradients',
                         'stable-diffusion-webui-embedding-merge', 'stable-diffusion-webui-text2prompt',
                         'stable-diffusion-webui-visualize-cross-attention-extension', 'tagger', 'weight_gradient']
04:01:28-363679 INFO     Available models: path="models\Stable-diffusion" items=1654 time=10.51
04:01:30-290175 INFO     Extension: script='extensions-builtin\Lora\scripts\lora_script.py' 04:01:30-258178 INFO
                         LoRA networks: available=2562 folders=46
04:01:34-994676 INFO     Extension: script='extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py' Using sqlite file:
                         extensions-builtin\sd-webui-agent-scheduler\task_scheduler.sqlite3
04:01:38-175176 INFO     Extension: script='extensions\adetailer\scripts\!adetailer.py' [-] ADetailer initialized. version: 24.5.1, num
                         models: 38
04:01:40-796176 INFO     Extension: script='extensions\sd-webui-cutoff\scripts\cutoff.py' [Cutoff] failed to load
                         `sgm.modules.GeneralConditioner`
04:01:40-871676 INFO     Extension: script='extensions\sd-webui-freeu\scripts\freeu.py' [sd-webui-freeu] Controlnet support: *disabled*
04:01:41-432677 INFO     Extension: script='extensions\sd-webui-prompt-all-in-one\scripts\on_app_started.py' sd-webui-prompt-all-in-one
                         background API service started successfully.
04:01:43-992675 ERROR    UI theme invalid: type=Standard theme="Default" available=['amethyst-nightfall', 'black-gray', 'black-orange',
                         'black-teal', 'emerald-paradise', 'invoked', 'light-teal', 'midnight-barbie', 'orchid-dreams', 'simple-dark',
                         'simple-light', 'timeless-beige']
04:01:43-995676 INFO     UI theme: type=Standard name="black-teal"
04:02:57-448677 INFO     Local URL: http://127.0.0.1:7860/
04:03:02-704676 INFO     [AgentScheduler] Task queue is empty
04:03:02-706676 INFO     [AgentScheduler] Registering APIs
IIB Database file has been successfully backed up to the backup folder.
04:03:05-849177 INFO     Select: model="model [0721223551]"
Loading model: X:\AI\automatic\models\Stable-diffusion\model.safetensors 
04:03:07-272176 INFO     Setting Torch parameters: device=cuda dtype=torch.float16 vae=torch.float16 unet=torch.float16
                         context=inference_mode fp16=True bf16=None optimization=Scaled-Dot-Product
04:03:14-976677 INFO     LDM: LatentDiffusion: mode=eps
04:03:14-980177 INFO     LDM: DiffusionWrapper params=859.52M
04:03:14-982677 INFO     Autodetect: model="Stable Diffusion" class=StableDiffusionPipeline
                         file="X:\AI\automatic\models\Stable-diffusion\model.safetensors"
                         size=2034MB
Loading model: X:\AI\automatic\models\VAE\difconsistencyRAWVAE_v1LOW.safetensors ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 334.6/334.6 MB 0:00:00

Relevant log output

2024-05-29 04:03:16,620 | sd | INFO | sd_hijack | Cross-attention: optimization=Scaled-Dot-Product
2024-05-29 04:03:16,628 | sd | ERROR | textual_inversion | Model not loaded 
2024-05-29 04:03:27,258 | sd | INFO | textual_inversion | Load embeddings: loaded=0 skipped=2273 time=10.63

but also, after loading a chosen SD 1.5 model:

2024-05-29 04:22:51,938 | sd | ERROR | textual_inversion | Model not loaded
2024-05-29 04:23:03,349 | sd | ERROR | errors | Executing callback: X:\AI\automatic\extensions\a1111-sd-webui-tagcomplete\scripts\tag_autocomplete_helper.py model_loaded_callback: ValueError

Backend

Original

Branch

Master

Model

SD 1.5

Acknowledgements

brknsoul commented 6 months ago

You don't have a checkpoint model installed. Open Networks (button near Generate, or second button down in the sidebar on the right), then Models, then References, then choose a model to download.

Or download a model from Civitai and place it into models\Stable-diffusion.
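
If the files are already in place, a quick check from Python can confirm what should be visible to SD.Next (a minimal sketch; the path assumes the default layout relative to the SD.Next root):

import pathlib

# Default checkpoint folder, relative to the SD.Next root -- adjust if yours differs.
models_dir = pathlib.Path(r"models\Stable-diffusion")
checkpoints = [p.name for p in models_dir.iterdir()
               if p.suffix.lower() in (".safetensors", ".ckpt")]
print(f"checkpoints found: {len(checkpoints)}")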

mart-hill commented 6 months ago

I have a lot of models, and I had one loaded (I named it model.safetensors); I can generate images. The UI just doesn't seem to parse embeddings, despite the a1111-sd-webui-tagcomplete extension clearly 'seeing' them. The log also shows that SD.Next 'sees' the embeddings, but they seem to be totally 'ignored' during startup (with an SD 1.5 model; I didn't test anything else):

2024-05-29 04:03:16,628 | sd | ERROR | textual_inversion | Model not loaded 
2024-05-29 04:03:27,258 | sd | INFO | textual_inversion | Load embeddings: loaded=0 skipped=2273 time=10.63
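
For context, the loaded=0 / skipped=2273 pattern looks like the embedding scan ran while no text encoder was attached yet. A purely illustrative stand-in for such a loader (not SD.Next's actual textual_inversion code), just to show how that pattern arises:

import os

def load_embeddings(embeddings_dir, model):
    # Simplified illustration: real code would parse each file and register
    # its vectors with the loaded model's tokenizer/text encoder.
    loaded, skipped = 0, 0
    for name in os.listdir(embeddings_dir):
        if not name.lower().endswith((".pt", ".bin", ".safetensors")):
            continue
        if model is None:
            skipped += 1  # no checkpoint loaded -> nothing to register vectors with
            continue
        loaded += 1
    return loaded, skipped

# With model=None every file is skipped, which matches "loaded=0 skipped=2273".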

The models and embeddings live in 'junction'-type folders (similar to symlinks).
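
To rule the junctions out, a small check can confirm the folder resolves and that its files are visible through the link (the path below is just an example; point it at the actual embeddings folder):

import os

embeddings_dir = r"X:\AI\automatic\models\embeddings"  # example path, adjust as needed
print("resolves to:", os.path.realpath(embeddings_dir))  # realpath() follows junctions/symlinks
visible = [f for f in os.listdir(embeddings_dir)
           if f.lower().endswith((".pt", ".bin", ".safetensors"))]
print("embeddings visible through the link:", len(visible))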

vladmandic commented 6 months ago

I have an idea; I'll take a look tomorrow.

mart-hill commented 6 months ago

Thank you! I ran --reinstall (torch 2.3.0 was installed), but the embeddings are still being 'ignored' in the log. 🙂 They are for SD 1.5 (most of them), SD 2.0 (a few), and SDXL (also just a few).

vladmandic commented 6 months ago

this should be fixed now.

mart-hill commented 6 months ago

With the latest commit (032018abf0daa5ecf43d2da856ecfaf93010d524), the UI doesn't seem to parse the TI embeddings in the prompt fields (it doesn't add the TI's vector count to the token count), but they are being used in image generation - at least in the diffusers pipeline. I have yet to test the original pipeline again. 🙂

vladmandic commented 6 months ago

you're talking about the ui token counter? the token counter is just a quick ui indicator; it has to take shortcuts in order to work quickly in the ui. it doesn't take wildcards into account, it doesn't take styles into account, etc.
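
as a rough illustration of that shortcut (not the actual token-counter code, and assuming the standard CLIP tokenizer from transformers): a quick counter can only tokenize the raw prompt text, while an exact count would have to swap each embedding trigger for its vector count:

from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

def quick_count(text):
    # fast path: tokenize the raw prompt as-is (what a ui counter can afford)
    return len(tokenizer(text)["input_ids"]) - 2  # drop BOS/EOS

def full_count(text, embedding_vectors):
    # exact path: replace each embedding trigger's own tokens with its vector count
    count = quick_count(text)
    for name, vectors in embedding_vectors.items():
        if name in text:
            count += vectors - quick_count(name)
    return count

# e.g. a 6-vector negative embedding counts as 6 tokens in full_count(),
# while quick_count() only sees however many tokens its filename happens to produce.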

mart-hill commented 6 months ago

Yup, and that's how I caught the bug earlier - that, and the fact that the recreated image was totally different from before because the TI wasn't taken into account (with the same pipeline and model, of course).

mart-hill commented 6 months ago

In the Original pipeline, parsing of the embeddings works fine; the counter takes the vector count into consideration - just tested.