SD.Next: Advanced Implementation of Generative Image Models
https://github.com/vladmandic/automatic

[Issue]: SDXL refiner with diffusers stopped working #2623

Closed mraggi closed 11 months ago

mraggi commented 11 months ago

Issue Description

With the latest git pull, the SDXL refiner (with diffusers) stopped working. The base model works correctly and completes its steps, but when the refiner starts I get this message:

Model expects an added time embedding vector of length 2560, but a vector of 2816 was created. Please make sure to disable `requires_aesthetics_score` with 
`pipe.register_to_config(requires_aesthetics_score=False)` to make sure `target_size` (1024, 1024) is correctly used by the model.

But `requires_aesthetics_score` is disabled. I tried enabling it; same result.
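For context (my own explanation, not part of the original report): the SDXL refiner UNet conditions on five added time ids (original size, crop coordinates, aesthetic score), each embedded to 256 dimensions and concatenated with the 1280-dim pooled text embedding, so it expects 5 * 256 + 1280 = 2560. With `requires_aesthetics_score=False` the img2img pipeline instead builds six ids (original size, crop coordinates, target size), producing 6 * 256 + 1280 = 2816, which is exactly the mismatch in the error. A minimal sketch of inspecting and toggling the flag on a plain diffusers pipeline (model id and dtype are assumptions):

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline

# Load the SDXL refiner as an img2img pipeline (assumed model id / dtype).
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# The refiner UNet expects 5 added time ids: 5 * 256 + 1280 = 2560.
print(refiner.unet.config.projection_class_embeddings_input_dim)

# With the flag enabled, the pipeline builds (original_size, crops, aesthetic_score);
# with it disabled it builds (original_size, crops, target_size) -> 2816 and raises
# the ValueError quoted above.
refiner.register_to_config(requires_aesthetics_score=True)
```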

Version Platform Description

Ubuntu Linux, kernel 6.2.0-35-generic, NVIDIA driver 535, Python 3.10.12.

Version: app=sd.next updated=2023-12-12 hash=69bda18e url=https://github.com/vladmandic/automatic/tree/master
13:46:54-930236 INFO Platform: arch=x86_64 cpu=x86_64 system=Linux release=6.2.0-35-generic python=3.10.12

Relevant log output

13:46:54-525652 INFO     Starting SD.Next                                                                                                                                                                                          
13:46:54-528921 INFO     Logger: file="/home/mraggi/sources/SD/vlad/automatic/sdnext.log" level=INFO size=629427 mode=append                                                                                                       
13:46:54-529925 INFO     Python 3.10.12 on Linux                                                                                                                                                                                   
13:46:54-544891 INFO     Version: app=sd.next updated=2023-12-12 hash=69bda18e url=https://github.com/vladmandic/automatic/tree/master                                                                                             
13:46:54-930236 INFO     Platform: arch=x86_64 cpu=x86_64 system=Linux release=6.2.0-35-generic python=3.10.12                                                                                                                     
13:46:54-934418 INFO     nVidia CUDA toolkit detected: nvidia-smi present                                                                                                                                                          
13:46:54-961390 INFO     Extensions: disabled=[]                                                                                                                                                                                   
13:46:54-963069 INFO     Extensions: enabled=['Lora', 'stable-diffusion-webui-rembg', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser', 'sd-extension-system-info', 'sd-webui-agent-scheduler',                      
                         'clip-interrogator-ext', 'sd-extension-chainner'] extensions-builtin                                                                                                                                      
13:46:54-966140 INFO     Extensions: enabled=['sd-webui-deoldify', 'sd-webui-reactor', 'sd-webui-infinite-image-browsing'] extensions                                                                                              
13:46:54-968467 INFO     Startup: standard                                                                                                                                                                                         
13:46:54-969706 INFO     Verifying requirements                                                                                                                                                                                    
13:46:54-997706 INFO     Verifying packages                                                                                                                                                                                        
13:46:55-000749 INFO     Verifying submodules                                                                                                                                                                                      
13:47:10-315641 INFO     Extensions enabled: ['Lora', 'stable-diffusion-webui-rembg', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser', 'sd-extension-system-info', 'sd-webui-agent-scheduler',                      
                         'clip-interrogator-ext', 'sd-extension-chainner', 'sd-webui-deoldify', 'sd-webui-reactor', 'sd-webui-infinite-image-browsing']                                                                            
13:47:10-321391 INFO     Verifying requirements                                                                                                                                                                                    
13:47:10-342207 INFO     Extension preload: {'extensions-builtin': 0.0, 'extensions': 0.0}                                                                                                                                         
13:47:10-344048 INFO     Command line args: ['--port', '3123', '--medvram'] medvram=True port=3123                                                                                                                                 
13:47:13-692087 INFO     Load packages: torch=2.1.0+cu121 diffusers=0.24.0 gradio=3.43.2                                                                                                                                           
13:47:14-157340 INFO     Engine: backend=Backend.DIFFUSERS compute=cuda mode=no_grad device=cuda cross-optimization="Scaled-Dot-Product"                                                                                           
13:47:14-186515 INFO     Device: device=NVIDIA GeForce RTX 2080 Ti n=2 arch=sm_90 cap=(7, 5) cuda=12.1 cudnn=8902 driver=535.113.01                                                                                                
13:47:16-333375 INFO     Available VAEs: path="models/VAE" items=0                                                                                                                                                                 
13:47:16-334790 INFO     Disabling uncompatible extensions: backend=Backend.DIFFUSERS ['sd-webui-controlnet', 'multidiffusion-upscaler-for-automatic1111', 'a1111-sd-webui-lycoris', 'sd-webui-animatediff']                       
13:47:16-336233 INFO     Available models: path="models/Stable-diffusion" items=4 time=0.00                                                                                                                                        
13:47:17-334257 INFO     Extension: script='extensions-builtin/sd-webui-agent-scheduler/scripts/task_scheduler.py' Using sqlite file: extensions-builtin/sd-webui-agent-scheduler/task_scheduler.sqlite3                           
/home/mraggi/sources/SD/vlad/automatic/venv/lib/python3.10/site-packages/numba/np/ufunc/parallel.py:371: NumbaWarning: The TBB threading layer requires TBB version 2021 update 6 or later i.e., TBB_INTERFACE_VERSION >= 12060. Found TBB_INTERFACE_VERSION = 12050. The TBB threading layer is disabled.
  warnings.warn(problem)
13:47:18-347854 INFO     Extensions time: 1.88 { clip-interrogator-ext=0.49 Lora=0.06 sd-webui-agent-scheduler=0.25 stable-diffusion-webui-rembg=0.37 sd-webui-deoldify=0.40 sd-webui-reactor=0.17 }                               
13:47:18-543704 INFO     Load UI theme: name="amethyst-nightfall" style=Auto base=sdnext.css                                                                                                                                       
13:47:20-369117 INFO     Local URL: http://127.0.0.1:3123/                                                                                                                                                                         
13:47:20-371226 INFO     Initializing middleware                                                                                                                                                                                   
13:47:20-581432 INFO     [AgentScheduler] Task queue is empty                                                                                                                                                                      
13:47:20-584804 INFO     [AgentScheduler] Registering APIs                                                                                                                                                                         
13:47:20-729222 INFO     Startup time: 10.37 { torch=2.87 gradio=0.44 libraries=2.64 extensions=1.88 face-restore=0.13 ui-extra-networks=0.18 ui-img2img=0.06 ui-settings=0.20 ui-extensions=0.90 ui-defaults=0.09 launch=0.42     
                         api=0.13 app-started=0.23 }                                                                                                                                                                               
13:47:55-927496 INFO     MOTD: N/A                                                                                                                                                                                                 
13:48:01-933152 INFO     Browser session: user=None client=127.0.0.1 agent=Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36                                                   
13:48:16-377037 INFO     Select: model="Diffusers/stabilityai/stable-diffusion-xl-base-1.0 [models/Dif]"                                                                                                                           
13:48:16-448656 INFO     Setting Torch parameters: device=cuda dtype=torch.float16 vae=torch.float16 unet=torch.float16 context=no_grad fp16=True bf16=False                                                                       
Loading pipeline components...: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:01<00:00,  5.70it/s]
13:48:17-758252 INFO     Load embeddings: loaded=0 skipped=0 time=0.00                                                                                                                                                             
13:48:18-050813 INFO     Load model: time=1.38 { load=1.38 } native=1024 {'ram': {'used': 1.48, 'total': 125.72}, 'gpu': {'used': 0.18, 'total': 10.75}, 'retries': 0, 'oom': 0}                                                   
13:48:18-052873 INFO     Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2                                                                                                                                                              
13:48:18-053846 INFO     Applying hypertile: unet=256                                                                                                                                                                              
13:48:18-067345 INFO     Select: refiner="Diffusers/stabilityai/stable-diffusion-xl-refiner-1.0 [models/Dif]"                                                                                                                      
13:48:18-069783 INFO     Setting Torch parameters: device=cuda dtype=torch.float16 vae=torch.float16 unet=torch.float16 context=no_grad fp16=True bf16=False                                                                       
Loading pipeline components...: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00,  6.87it/s]
13:48:18-828577 INFO     Load embeddings: loaded=0 skipped=0 time=0.00                                                                                                                                                             
13:48:19-120113 INFO     Load refiner: time=0.76 { load=0.76 } native=1024 {'ram': {'used': 1.59, 'total': 125.72}, 'gpu': {'used': 0.18, 'total': 10.75}, 'retries': 0, 'oom': 0}                                                 
Progress  2.08it/s █████████████████████████████████ 100% 32/32 00:15 00:00 Base
13:48:39-053890 ERROR    Exception: Model expects an added time embedding vector of length 2560, but a vector of 2816 was created. Please make sure to disable `requires_aesthetics_score` with                                    
                         `pipe.register_to_config(requires_aesthetics_score=False)` to make sure `target_size` (1024, 1024) is correctly used by the model.                                                                        
13:48:39-056155 ERROR    Arguments: args=('task(9u7yshm2bkh5a3j)', 'Professional photograph of a fantasy floating magic tower, fantasy art, sci fi, masterpiece, best quality, (Unreal Engine 5:0.5), dense and varied vegetation, 
                         flowers, vibrant colors', 'watermark, blurry, low quality, worst quality, black and white, out of frame, grainy', [], 32, 0, 0, True, False, False, 1, 1, 6, 6, 0.7, 1, -1.0, -1.0, 0, 0, 0, 1024, 1024,  
                         True, 0.3, 2, 'None', False, 20, 0, 0, 16, 0.75, '', '', False, 4, 0.95, False, 1, 1, False, 0.6, 1, [], 0, False, False, 'positive', 'comma', 0, False, False, '', 0, '', [], 0, '', [], 0, '', [],      
                         False, True, False, False, False, False, 0, 'None', 16, 'None', 1, False, 'None', 2, True, 1, 0, 'none', 0.5, None, None, False, '0', '0', 'inswapper_128.onnx', 'CodeFormer', 1, True, 'None', 1, 1,     
                         False, True, 1, 0, 0, False, 0.5, True, False, 'CUDA', False, 0, 'None', '', None) kwargs={}                                                                                                              
13:48:39-059833 ERROR    gradio call: ValueError                                                                                                                                                                                   
╭───────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────────────────────────────────────────────╮
│ /home/mraggi/sources/SD/vlad/automatic/modules/call_queue.py:31 in f                                                                                                                                 │
│                                                                                                                                                                                                      │
│   30 │   │   │   try:                                                                                                                                                                                │
│ ❱ 31 │   │   │   │   res = func(*args, **kwargs)                                                                                                                                                     │
│   32 │   │   │   │   progress.record_results(id_task, res)                                                                                                                                           │
│                                                                                                                                                                                                      │
│ /home/mraggi/sources/SD/vlad/automatic/modules/txt2img.py:69 in txt2img                                                                                                                              │
│                                                                                                                                                                                                      │
│   68 │   if processed is None:                                                                                                                                                                       │
│ ❱ 69 │   │   processed = processing.process_images(p)                                                                                                                                                │
│   70 │   p.close()                                                                                                                                                                                   │
│                                                                                                                                                                                                      │
│ /home/mraggi/sources/SD/vlad/automatic/modules/processing.py:734 in process_images                                                                                                                   │
│                                                                                                                                                                                                      │
│    733 │   │   │   with context_hypertile_vae(p), context_hypertile_unet(p):                                                                                                                         │
│ ❱  734 │   │   │   │   res = process_images_inner(p)                                                                                                                                                 │
│    735                                                                                                                                                                                               │
│                                                                                                                                                                                                      │
│ /home/mraggi/sources/SD/vlad/automatic/modules/processing.py:886 in process_images_inner                                                                                                             │
│                                                                                                                                                                                                      │
│    885 │   │   │   │   from modules.processing_diffusers import process_diffusers                                                                                                                    │
│ ❱  886 │   │   │   │   x_samples_ddim = process_diffusers(p, p.seeds, p.prompts, p.negative_pro                                                                                                      │
│    887 │   │   │   else:                                                                                                                                                                             │
│                                                                                                                                                                                                      │
│ /home/mraggi/sources/SD/vlad/automatic/modules/processing_diffusers.py:637 in process_diffusers                                                                                                      │
│                                                                                                                                                                                                      │
│   636 │   │   │   try:                                                                                                                                                                               │
│ ❱ 637 │   │   │   │   refiner_output = shared.sd_refiner(**refiner_args) # pylint: disable=not                                                                                                       │
│   638 │   │   │   except AssertionError as e:                                                                                                                                                        │
│                                                                                                                                                                                                      │
│ /home/mraggi/sources/SD/vlad/automatic/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py:115 in decorate_context                                                                          │
│                                                                                                                                                                                                      │
│   114 │   │   with ctx_factory():                                                                                                                                                                    │
│ ❱ 115 │   │   │   return func(*args, **kwargs)                                                                                                                                                       │
│   116                                                                                                                                                                                                │
│                                                                                                                                                                                                      │
│ /home/mraggi/sources/SD/vlad/automatic/venv/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_img2img.py:1237 in __call__                            │
│                                                                                                                                                                                                      │
│   1236 │   │                                                                                                                                                                                         │
│ ❱ 1237 │   │   add_time_ids, add_neg_time_ids = self._get_add_time_ids(                                                                                                                              │
│   1238 │   │   │   original_size,                                                                                                                                                                    │
│                                                                                                                                                                                                      │
│ /home/mraggi/sources/SD/vlad/automatic/venv/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl_img2img.py:795 in _get_add_time_ids                    │
│                                                                                                                                                                                                      │
│    794 │   │   ):                                                                                                                                                                                    │
│ ❱  795 │   │   │   raise ValueError(                                                                                                                                                                 │
│    796 │   │   │   │   f"Model expects an added time embedding vector of length {expected_add_e                                                                                                      │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: Model expects an added time embedding vector of length 2560, but a vector of 2816 was created. Please make sure to disable `requires_aesthetics_score` with `pipe.register_to_config(requires_aesthetics_score=False)` 
to make sure `target_size` (1024, 1024) is correctly used by the model.
13:48:47-139643 INFO     Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2                                                                                                                                                              
13:48:47-141551 INFO     Applying hypertile: unet=256                                                                                                                                                                              
Progress  2.05it/s █████████████████████████████████ 100% 32/32 00:15 00:00 Base
13:49:15-477664 INFO     Processed: images=1 time=28.32 its=1.13 memory={'ram': {'used': 14.46, 'total': 125.72}, 'gpu': {'used': 2.67, 'total': 10.75}, 'retries': 0, 'oom': 0}

Backend

Diffusers

Branch

Master

Model

SD-XL

Acknowledgements

vladmandic commented 11 months ago

I cannot reproduce the problem.

mraggi commented 11 months ago

I downloaded the models from Hugging Face from within the app (it says safetensors...). I've disabled free-u and hypertile and started with --safe, and I get the exact same message when the refiner steps run.

vladmandic commented 11 months ago

And how are you loading it? It may be that you're trying to load the refiner model as the base model.
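One quick way to check which UNet actually got loaded, regardless of the file name, is to look at the width of its added-embedding projection. A small helper sketch (the attribute is a standard diffusers UNet config key; the 2816/2560 mapping is the base/refiner arithmetic explained above):

```python
def identify_sdxl_unet(pipe) -> str:
    """Classify a loaded SDXL pipeline by its UNet's added-embedding input width."""
    dim = pipe.unet.config.projection_class_embeddings_input_dim
    # Base SDXL UNet:    6 time ids -> 6 * 256 + 1280 = 2816
    # Refiner SDXL UNet: 5 time ids -> 5 * 256 + 1280 = 2560
    return {2816: "base", 2560: "refiner"}.get(dim, f"unknown ({dim})")
```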

mraggi commented 11 months ago

Hi.

Just an update: I downloaded the .safetensors file and told SD.Next to use that one, and it works fine. So the one that doesn't work is the one downloaded automatically from within the app.

Weird.
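(One possible explanation, offered as an assumption rather than a confirmed root cause: a single-file `.safetensors` checkpoint and a diffusers-format snapshot go through different loaders, and each loader fills in `requires_aesthetics_score` on its own. A sketch of the two paths in plain diffusers, with the file path assumed:)

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline

# Path 1: a single .safetensors checkpoint (the variant that worked here).
refiner_file = StableDiffusionXLImg2ImgPipeline.from_single_file(
    "sd_xl_refiner_1.0.safetensors", torch_dtype=torch.float16
)

# Path 2: a diffusers-format snapshot, as the in-app downloader fetches it.
refiner_repo = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
)

# If a pipeline ends up with requires_aesthetics_score=False while its UNet
# expects the 2560-dim refiner embedding, the refiner call fails as above.
for name, p in (("single_file", refiner_file), ("from_pretrained", refiner_repo)):
    print(name,
          p.config.requires_aesthetics_score,
          p.unet.config.projection_class_embeddings_input_dim)
```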

vladmandic commented 11 months ago

Weird, but a good lead; I'll check.

vladmandic commented 11 months ago

I tracked it down; fixed in the dev branch.

FieldMarshallVague commented 11 months ago

Hi, I am also getting this problem. I've tried all of the above checks (including starting with --safe). I downloaded the base and refiner models from HF using the links from the SDXL installation page in the wiki.

But I get the same error:

ValueError: Model expects an added time embedding vector of length 2560, but a vector of 2816 was created. Please make sure to disable
`requires_aesthetics_score` with `pipe.register_to_config(requires_aesthetics_score=False)` to make sure `target_size` (768, 1152) is correctly
used by the model.

Here is the interface showing settings and error, just for reference:

[screenshot: interface showing the settings and the error]

Please note: I was unable to run SDXL models at all until I checked 'Enable model CPU offload (--medvram)'. Previously this just seemed to time out with 99% GPU memory usage.

I am running on an RTX 4090 with a 13900K and 64 GB of RAM.
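For reference, that checkbox corresponds to diffusers' model-level CPU offload, which keeps each sub-model on the GPU only while it is running. A rough equivalent in a plain diffusers script (model id assumed; requires `accelerate` to be installed):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
)
# Swap sub-models between CPU and GPU on demand, trading some speed for VRAM headroom.
pipe.enable_model_cpu_offload()
```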

I've included my --safe --debug output for extra info.

Details

(vlad) H:\airt\vladdiffusion>call .\webui.bat --safe --debug Using VENV: H:\airt\vladdiffusion\venv 16:23:03-539673 INFO Starting SD.Next 16:23:03-541673 INFO Logger: file="H:\airt\vladdiffusion\sdnext.log" level=DEBUG size=65 mode=create 16:23:03-542673 INFO Python 3.10.10 on Windows 16:23:03-636243 INFO Version: app=sd.next updated=2023-12-29 hash=f4d4f8da url=https://github.com/vladmandic/automatic.git/tree/master 16:23:04-037299 INFO Platform: arch=AMD64 cpu=Intel64 Family 6 Model 183 Stepping 1, GenuineIntel system=Windows release=Windows-10-10.0.22621-SP0 python=3.10.10 16:23:04-038299 DEBUG Setting environment tuning 16:23:04-039299 DEBUG Cache folder: C:\Users\me\.cache\huggingface\hub 16:23:04-040298 DEBUG Torch overrides: cuda=False rocm=False ipex=False diml=False openvino=False 16:23:04-041299 DEBUG Torch allowed: cuda=True rocm=True ipex=True diml=True openvino=True 16:23:04-045298 INFO nVidia CUDA toolkit detected: nvidia-smi present 16:23:04-077299 WARNING Modified files: ['vladdiffusion.code-workspace'] 16:23:04-105299 DEBUG Repository update time: Fri Dec 29 14:32:52 2023 16:23:04-106301 INFO Startup: standard 16:23:04-107299 INFO Verifying requirements 16:23:04-115301 INFO Verifying packages 16:23:04-116302 INFO Verifying submodules 16:23:05-903076 DEBUG Submodule: extensions-builtin/sd-extension-chainner / main 16:23:05-949143 DEBUG Submodule: extensions-builtin/sd-extension-system-info / main 16:23:05-995144 DEBUG Submodule: extensions-builtin/sd-webui-agent-scheduler / main 16:23:06-040113 DEBUG Submodule: extensions-builtin/sd-webui-controlnet / main 16:23:06-089051 DEBUG Submodule: extensions-builtin/stable-diffusion-webui-images-browser / main 16:23:06-133564 DEBUG Submodule: extensions-builtin/stable-diffusion-webui-rembg / master 16:23:06-179564 DEBUG Submodule: modules/k-diffusion / master 16:23:06-224075 DEBUG Submodule: modules/lora / main 16:23:06-271075 DEBUG Submodule: wiki / master 16:23:06-297075 DEBUG Register paths 16:23:06-362588 DEBUG Installed packages: 219 16:23:06-364204 DEBUG Extensions all: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg'] 16:23:06-491717 DEBUG Running extension installer: H:\airt\vladdiffusion\extensions-builtin\sd-extension-system-info\install.py 16:23:06-737247 DEBUG Running extension installer: H:\airt\vladdiffusion\extensions-builtin\sd-webui-agent-scheduler\install.py 16:23:06-981738 DEBUG Running extension installer: H:\airt\vladdiffusion\extensions-builtin\sd-webui-controlnet\install.py 16:23:07-219109 DEBUG Running extension installer: H:\airt\vladdiffusion\extensions-builtin\stable-diffusion-webui-images-browser\install.py 16:23:07-462252 DEBUG Running extension installer: H:\airt\vladdiffusion\extensions-builtin\stable-diffusion-webui-rembg\install.py 16:23:07-713106 INFO Extensions enabled: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg'] 16:23:07-713969 INFO Verifying requirements 16:23:07-728696 DEBUG Setup complete without errors: 1703866988 16:23:07-729697 INFO Running in safe mode without user extensions 16:23:07-733697 INFO Extension preload: {'extensions-builtin': 0.0} 16:23:07-735696 DEBUG Starting module: 16:23:07-736696 INFO Command line args: ['--safe', '--debug'] debug=True safe=True 16:23:07-737696 DEBUG Env flags: [] 16:23:10-470172 INFO 
Load packages: torch=2.1.1+cu121 diffusers=0.25.0 gradio=3.43.2 16:23:10-985292 DEBUG Read: file="config.json" json=34 bytes=1503 time=0.000 16:23:10-987293 DEBUG Unknown settings: ['control_net_models_path', 'control_net_model_cache_size'] 16:23:10-988292 INFO Engine: backend=Backend.DIFFUSERS compute=cuda mode=no_grad device=cuda cross-optimization="Scaled-Dot-Product" 16:23:11-023687 INFO Device: device=NVIDIA GeForce RTX 4090 n=1 arch=sm_90 cap=(8, 9) cuda=12.1 cudnn=8801 driver=546.33 16:23:12-746083 DEBUG Entering start sequence 16:23:12-748983 DEBUG Initializing 16:23:12-749987 INFO Available VAEs: path="H:\airt\models\sd\VAE" items=1 16:23:12-750983 INFO Disabled extensions: ['sd-extension-chainner', 'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg'] 16:23:12-752983 DEBUG Scanning diffusers cache: H:\airt\models\sd\Diffusers H:\airt\models\sd\Diffusers items=1 time=0.00 16:23:12-753983 DEBUG Read: file="cache.json" json=2 bytes=1439 time=0.000 16:23:12-755983 DEBUG Read: file="metadata.json" json=31 bytes=16882 time=0.001 16:23:12-757983 INFO Available models: path="H:\airt\models\sd\Stable-diffusion" items=24 time=0.01 16:23:12-894349 DEBUG Load extensions 16:23:12-955112 INFO Extension: script='scripts\faceid.py' [2;36m16:23:12-953318[0m[2;36m [0m[1;31mERROR [0m FaceID: No module named [32m'insightface'[0m 16:23:13-415485 INFO Extensions time: 0.52 { vladdiffusion=0.07 Lora=0.44 } 16:23:13-442992 DEBUG Read: file="html/upscalers.json" json=4 bytes=2672 time=0.000 16:23:13-445301 DEBUG Upscaler type=ESRGAN folder="H:\airt\models\sd\ESRGAN" model="4x-UltraSharp" path="H:\airt\models\sd\ESRGAN\4x-UltraSharp.pth" 16:23:13-446301 DEBUG Upscaler type=ESRGAN folder="H:\airt\models\sd\ESRGAN" model="Deoldify790000" path="H:\airt\models\sd\ESRGAN\Deoldify790000.pth" 16:23:13-447301 DEBUG Upscaler type=ESRGAN folder="H:\airt\models\sd\ESRGAN" model="ESRGAN_4x" path="H:\airt\models\sd\ESRGAN\ESRGAN_4x.pth" 16:23:13-448301 DEBUG Upscaler type=ESRGAN folder="H:\airt\models\sd\ESRGAN" model="LADDIER1_282500_G" path="H:\airt\models\sd\ESRGAN\LADDIER1_282500_G.pth" 16:23:13-450301 DEBUG Load upscalers: total=32 downloaded=5 user=4 time=0.03 ['None', 'Lanczos', 'Nearest', 'ESRGAN', 'LDSR', 'RealESRGAN', 'SCUNet', 'SD', 'SwinIR'] 16:23:13-459301 DEBUG Load styles: folder="H:\airt\models\sd\styles" items=288 time=0.01 16:23:13-462301 DEBUG Creating UI 16:23:13-463302 INFO Load UI theme: name="black-teal" style=Auto base=sdnext.css 16:23:13-475766 DEBUG Read: file="html\reference.json" json=31 bytes=16496 time=0.000 16:23:13-493767 DEBUG Extra networks: page='model' items=55 subfolders=4 tab=txt2img folders=['H:\\airt\\models\\sd\\Stable-diffusion', 'H:\\airt\\models\\sd\\Diffusers', 'models\\Reference'] list=0.01 desc=0.00 info=0.00 workers=2 16:23:13-502767 DEBUG Extra networks: page='style' items=288 subfolders=2 tab=txt2img folders=['H:\\airt\\models\\sd\\styles', 'html'] list=0.01 desc=0.00 info=0.00 workers=2 16:23:13-505029 DEBUG Extra networks: page='embedding' items=4 subfolders=1 tab=txt2img folders=['H:\\airt\\models\\sd\\embeddings'] list=0.00 desc=0.00 info=0.00 workers=2 16:23:13-506639 DEBUG Extra networks: page='hypernetwork' items=0 subfolders=1 tab=txt2img folders=['H:\\airt\\models\\sd\\hypernetworks'] list=0.00 desc=0.00 info=0.00 workers=2 16:23:13-507937 DEBUG Extra networks: page='vae' items=1 subfolders=1 tab=txt2img folders=['H:\\airt\\models\\sd\\VAE'] list=0.00 desc=0.00 info=0.00 workers=2 
16:23:13-508744 DEBUG Extra networks: page='lora' items=4 subfolders=1 tab=txt2img folders=['H:\\airt\\models\\sd\\Lora', 'H:\\airt\\models\\sd\\LyCORIS'] list=0.00 desc=0.00 info=0.00 workers=2 16:23:13-634969 DEBUG Control initialize: models=H:\airt\models\sd\control 16:23:13-864461 DEBUG Read: file="ui-config.json" json=33 bytes=1424 time=0.003 16:23:14-028047 DEBUG Themes: builtin=9 default=5 external=55 16:23:14-060241 DEBUG Read: file="H:\airt\vladdiffusion\html\extensions.json" json=330 bytes=193286 time=0.001 16:23:14-062241 ERROR Failed reading extension data from Git repository: sd-extension-chainner: [Errno 2] No such file or directory: 'H:\\airt\\vladdiffusion\\.git\\modules\\extensions-builtin\\sd-extension-chainner\\description' 16:23:14-064242 ERROR Failed reading extension data from Git repository: sd-extension-system-info: [Errno 2] No such file or directory: 'H:\\airt\\vladdiffusion\\.git\\modules\\extensions-builtin\\sd-extension-system-info\\description' 16:23:14-348139 ERROR Failed reading extension data from Git repository: sd-webui-agent-scheduler: [Errno 2] No such file or directory: 'H:\\airt\\vladdiffusion\\.git\\modules\\extensions-builtin\\sd-webui-agent-scheduler\\description' 16:23:14-350138 ERROR Failed reading extension data from Git repository: sd-webui-controlnet: [Errno 2] No such file or directory: 'H:\\airt\\vladdiffusion\\.git\\modules\\extensions-builtin\\sd-webui-controlnet\\description' 16:23:14-352137 ERROR Failed reading extension data from Git repository: stable-diffusion-webui-images-browser: [Errno 2] No such file or directory: 'H:\\airt\\vladdiffusion\\.git\\modules\\extensions-builtin\\stable-diffusion-webui-images-browser\\description' 16:23:14-639661 ERROR Failed reading extension data from Git repository: stable-diffusion-webui-rembg: [Errno 2] No such file or directory: 'H:\\airt\\vladdiffusion\\.git\\modules\\extensions-builtin\\stable-diffusion-webui-rembg\\description' 16:23:14-666791 DEBUG Extension list: processed=320 installed=7 enabled=7 disabled=0 visible=320 hidden=0 16:23:14-908615 INFO Local URL: http://127.0.0.1:7860/ 16:23:14-909414 DEBUG Gradio functions: registered=1286 16:23:14-910420 INFO Initializing middleware 16:23:14-913400 DEBUG Creating API 16:23:14-988866 DEBUG Scripts setup: [] 16:23:14-989866 DEBUG Model metadata: file="metadata.json" no changes 16:23:14-990867 DEBUG Model auto load disabled 16:23:14-991866 DEBUG Save: file="config.json" json=34 bytes=1457 time=0.000 16:23:14-992867 DEBUG Unused settings: ['control_net_models_path', 'control_net_model_cache_size'] 16:23:14-993867 INFO Startup time: 7.25 { torch=2.08 gradio=0.62 libraries=2.28 extensions=0.52 face-restore=0.14 ui-extra-networks=0.10 ui-txt2img=0.05 ui-extras=0.13 ui-settings=0.20 ui-extensions=0.62 ui-defaults=0.08 launch=0.15 api=0.05 } 16:23:22-743885 INFO MOTD: N/A 16:23:23-994810 DEBUG Themes: builtin=9 default=5 external=55 16:23:24-379393 INFO Browser session: user=None client=127.0.0.1 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:121.0) Gecko/20100101 Firefox/121.0 16:23:32-519144 DEBUG Model requested: fn=txt2img 16:23:32-521143 INFO Select: model="SDXL\sd_xl_base_1.0 [31e35c80fc]" 16:23:32-523146 DEBUG Load model weights: existing=False target=H:\airt\models\sd\Stable-diffusion\SDXL\sd_xl_base_1.0.safetensors info=None Loading model: H:\airt\models\sd\Stable-diffusion\SDXL\sd_xl_base_1.0.safetensors ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/6.9 GB -:--:-- 16:23:32-574653 INFO Torch override dtype: no-half set 16:23:32-575653 INFO 
Torch override VAE dtype: no-half set 16:23:32-576654 DEBUG Desired Torch parameters: dtype=BF16 no-half=True no-half-vae=True upscast=False 16:23:32-577656 INFO Setting Torch parameters: device=cuda dtype=torch.float32 vae=torch.float32 unet=torch.float32 context=no_grad fp16=False bf16=True 16:23:32-578653 DEBUG Diffusers loading: path="H:\airt\models\sd\Stable-diffusion\SDXL\sd_xl_base_1.0.safetensors" 16:23:32-579653 INFO Autodetect: model="Stable Diffusion XL" class=StableDiffusionXLPipeline file="H:\airt\models\sd\Stable-diffusion\SDXL\sd_xl_base_1.0.safetensors" size=6617MB 16:23:36-018653 DEBUG Setting model: pipeline=StableDiffusionXLPipeline config={'low_cpu_mem_usage': True, 'torch_dtype': torch.float32, 'load_connected_pipeline': True, 'extract_ema': True, 'force_zeros_for_empty_prompt ': True, 'requires_aesthetics_score': False, 'use_safetensors': True} 16:23:36-019653 DEBUG Setting model VAE: name=None upcast=True 16:23:36-020653 DEBUG Setting model: enable model CPU offload 16:23:36-029655 DEBUG Setting model: enable VAE slicing 16:23:36-030655 DEBUG Setting model: enable VAE tiling 16:23:36-045678 INFO Load embeddings: loaded=0 skipped=4 time=0.00 16:23:36-232188 DEBUG gc: collected=5951 device=cuda {'ram': {'used': 14.01, 'total': 63.77}, 'gpu': {'used': 1.52, 'total': 23.99}, 'retries': 0, 'oom': 0} 16:23:36-237187 INFO Load model: time=3.52 { load=3.52 } native=1024 {'ram': {'used': 14.01, 'total': 63.77}, 'gpu': {'used': 1.52, 'total': 23.99}, 'retries': 0, 'oom': 0} 16:23:36-255979 INFO Select: refiner="SDXL\sd_xl_refiner_1.0 [7440042bbd]" 16:23:36-256984 DEBUG Load model weights: existing=False target=H:\airt\models\sd\Stable-diffusion\SDXL\sd_xl_refiner_1.0.safetensors info=None Loading model: H:\airt\models\sd\Stable-diffusion\SDXL\sd_xl_refiner_1.0.safetensors ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/6.1 GB -:--:-- 16:23:36-289982 INFO Torch override dtype: no-half set 16:23:36-290983 INFO Torch override VAE dtype: no-half set 16:23:36-291984 DEBUG Desired Torch parameters: dtype=BF16 no-half=True no-half-vae=True upscast=False 16:23:36-292983 INFO Setting Torch parameters: device=cuda dtype=torch.float32 vae=torch.float32 unet=torch.float32 context=no_grad fp16=False bf16=True 16:23:36-293891 DEBUG Diffusers loading: path="H:\airt\models\sd\Stable-diffusion\SDXL\sd_xl_refiner_1.0.safetensors" 16:23:36-294336 INFO Autodetect: refiner="Stable Diffusion XL" class=StableDiffusionXLPipeline file="H:\airt\models\sd\Stable-diffusion\SDXL\sd_xl_refiner_1.0.safetensors" size=5795MB 16:23:38-782292 DEBUG Setting refiner: pipeline=StableDiffusionXLPipeline config={'low_cpu_mem_usage': True, 'torch_dtype': torch.float32, 'load_connected_pipeline': True, 'extract_ema': True, 'force_zeros_for_empty_prompt ': True, 'requires_aesthetics_score': False, 'use_safetensors': True} 16:23:38-784341 DEBUG Setting refiner VAE: name=None upcast=True 16:23:38-785293 DEBUG Setting refiner: enable model CPU offload 16:23:38-793293 DEBUG Setting refiner: enable VAE slicing 16:23:38-794292 DEBUG Setting refiner: enable VAE tiling 16:23:38-804292 INFO Load embeddings: loaded=0 skipped=4 time=0.00 16:23:38-986403 DEBUG gc: collected=1585 device=cuda {'ram': {'used': 25.36, 'total': 63.77}, 'gpu': {'used': 1.52, 'total': 23.99}, 'retries': 0, 'oom': 0} 16:23:38-992402 INFO Load refiner: time=2.55 { load=2.55 } native=1024 {'ram': {'used': 25.36, 'total': 63.77}, 'gpu': {'used': 1.52, 'total': 23.99}, 'retries': 0, 'oom': 0} 16:23:40-060032 DEBUG Diffuser pipeline: StableDiffusionXLPipeline 
task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 2048]), 'pooled_prompt_embeds': torch.Size([1, 1280]), 'negative_prompt_embeds': torch.Size([1, 77, 2048]), 'negative_pooled_prompt_embeds': torch.Size([1, 1280]), 'guidance_scale': 6.5, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 30, 'eta': 1.0, 'guidance_rescale': 0.7, 'denoising_end': 0.8, 'width': 768, 'height': 512, 'parser': 'Full parser'} 16:23:40-096033 DEBUG Sampler: sampler="Euler a" config={'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'prediction_type': 'epsilon', 'rescale_betas_zero_snr': False} Progress 1.47it/s ████▎ 12% 3/24 00:02 00:14 Base16:23:42-739690 DEBUG Load VAE decode approximate: model="H:\airt\models\sd\VAE-approx\model.pt" Progress 5.16it/s █████████████████████████████████ 100% 24/24 00:04 00:00 Base 16:23:46-549590 DEBUG Init hires: upscaler="Latent (nearest-exact)" sampler="DPM++ 2M" resize=0x0 upscale=1152x768 16:23:46-551591 INFO Hires: upscaler=Latent (nearest-exact) width=1152 height=768 images=1 16:23:46-776515 DEBUG Pipeline class change: original=StableDiffusionXLPipeline target=StableDiffusionXLImg2ImgPipeline 16:23:46-778516 DEBUG Sampler: sampler="DPM++ 2M" config={'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'prediction_type': 'epsilon', 'thresholding': False, 'sample_max_value': 1.0, 'algorithm_type': 'sde-dpmsolver++', 'solver_type': 'midpoint', 'lower_order_final': True, 'use_karras_sigmas': True} 16:23:47-362950 DEBUG Diffuser pipeline: StableDiffusionXLImg2ImgPipeline task=DiffusersTaskType.IMAGE_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 2048]), 'pooled_prompt_embeds': torch.Size([1, 1280]), 'negative_prompt_embeds': torch.Size([1, 77, 2048]), 'negative_pooled_prompt_embeds': torch.Size([1, 1280]), 'guidance_scale': 4.6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 26, 'eta': 1.0, 'guidance_rescale': 0.7, 'image': , 'strength': 0.6, 'parser': 'Full parser'} Progress 2.18it/s ████████████████████████████████ 100% 15/15 00:06 00:00 Hires 16:23:55-562352 DEBUG Sampler: sampler="DPM++ 2M" config={'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'prediction_type': 'epsilon', 'thresholding': False, 'sample_max_value': 1.0, 'algorithm_type': 'sde-dpmsolver++', 'solver_type': 'midpoint', 'lower_order_final': True, 'use_karras_sigmas': True} 16:23:55-565352 DEBUG Pipeline class change: original=StableDiffusionXLImg2ImgPipeline target=StableDiffusionXLPipeline 16:23:55-567352 DEBUG Pipeline class change: original=StableDiffusionXLPipeline target=StableDiffusionXLImg2ImgPipeline 16:23:56-305792 DEBUG Diffuser pipeline: StableDiffusionXLImg2ImgPipeline task=DiffusersTaskType.IMAGE_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 1280]), 'pooled_prompt_embeds': torch.Size([1, 1280]), 'negative_prompt_embeds': torch.Size([1, 77, 1280]), 'negative_pooled_prompt_embeds': torch.Size([1, 1280]), 'guidance_scale': 4.6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 21, 'eta': 1.0, 'guidance_rescale': 0.7, 'denoising_start': 0.8, 'denoising_end': 1, 'image': , 'parser': 'Full parser'} 16:23:56-318792 ERROR Exception: Model expects an added time embedding vector of length 2560, but a vector of 2816 was created. 
Please make sure to disable `requires_aesthetics_score` with `pipe.register_to_config(requires_aesthetics_score=False)` to make sure `target_size` (768, 1152) is correctly used by the model. 16:23:56-320792 ERROR Arguments: args=('task(ir5a0k0n34h22z6)', 'an impressionist oil painting artwork of a fisherman on the open sea, in a stormy sea, colorful sky, painting', '(blurry:1.3), worst quality, 3D, cgi,', [], 24, 13, 10, False, False, False, 1, 1, 6.5, 4.6, 0.7, 1, -1.0, -1.0, 0, 0, 0, 512, 768, True, 0.6, 1.5, 'Latent (nearest-exact)', True, 15, 0, 0, 8, 0.8, '', '', False, 4, 0.95, False, 1, 1, False, 0.6, 4, [], 0, 3, 1, 1, 0.8, 8, 64, True, False, False, 'positive', 'comma', 0, False, False, '', 'None', True, 0, 'None', 2, True, 1, 0, 0, '', [], 0, '', [], 0, '', [], False, True, False, False, False, False, 0, 'None', 16, 'None', 1, True, 'None', 2, True, 1, 0, True, 'none', 0.5, None) kwargs={} 16:23:56-323792 ERROR gradio call: ValueError ╭───────────────────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────────────────╮ │ H:\airt\vladdiffusion\modules\call_queue.py:31 in f │ │ │ │ 30 │ │ │ try: │ │ ❱ 31 │ │ │ │ res = func(*args, **kwargs) │ │ 32 │ │ │ │ progress.record_results(id_task, res) │ │ │ │ H:\airt\vladdiffusion\modules\txt2img.py:88 in txt2img │ │ │ │ 87 │ if processed is None: │ │ ❱ 88 │ │ processed = processing.process_images(p) │ │ 89 │ p.close() │ │ │ │ H:\airt\vladdiffusion\modules\processing.py:760 in process_images │ │ │ │ 759 │ │ │ with context_hypertile_vae(p), context_hypertile_unet(p): │ │ ❱ 760 │ │ │ │ res = process_images_inner(p) │ │ 761 │ │ │ │ H:\airt\vladdiffusion\modules\processing.py:922 in process_images_inner │ │ │ │ 921 │ │ │ │ from modules.processing_diffusers import process_diffusers │ │ ❱ 922 │ │ │ │ x_samples_ddim = process_diffusers(p) │ │ 923 │ │ │ else: │ │ │ │ H:\airt\vladdiffusion\modules\processing_diffusers.py:577 in process_diffusers │ │ │ │ 576 │ │ │ │ shared.sd_refiner.register_to_config(requires_aesthetics_score=shared.op │ │ ❱ 577 │ │ │ │ refiner_output = shared.sd_refiner(**refiner_args) # pylint: disable=not │ │ 578 │ │ │ │ downcast_openvino(op="refiner") │ │ │ │ H:\airt\vladdiffusion\venv\lib\site-packages\torch\utils\_contextlib.py:115 in decorate_context │ │ │ │ 114 │ │ with ctx_factory(): │ │ ❱ 115 │ │ │ return func(*args, **kwargs) │ │ 116 │ │ │ │ H:\airt\vladdiffusion\venv\lib\site-packages\diffusers\pipelines\stable_diffusion_xl\pipeline_stable_diffusion_xl_img2img.py:1315 in │ │ __call__ │ │ │ │ 1314 │ │ │ │ ❱ 1315 │ │ add_time_ids, add_neg_time_ids = self._get_add_time_ids( │ │ 1316 │ │ │ original_size, │ │ │ │ H:\airt\vladdiffusion\venv\lib\site-packages\diffusers\pipelines\stable_diffusion_xl\pipeline_stable_diffusion_xl_img2img.py:807 in │ │ _get_add_time_ids │ │ │ │ 806 │ │ ): │ │ ❱ 807 │ │ │ raise ValueError( │ │ 808 │ │ │ │ f"Model expects an added time embedding vector of length {expected_add_e │ ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ ValueError: Model expects an added time embedding vector of length 2560, but a vector of 2816 was created. Please make sure to disable `requires_aesthetics_score` with `pipe.register_to_config(requires_aesthetics_score=False)` to make sure `target_size` (768, 1152) is correctly used by the model. 
16:24:00-394961 DEBUG Server: alive=True jobs=1 requests=58 uptime=49 memory=17.83/63.77 backend=Backend.DIFFUSERS state=job="txt2img" 0/-1

noahhaon commented 10 months ago

I got this error too on the latest main and dev branches, but enabling 'Require aesthetics score' in the config fixed it.
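That is consistent with the dimension arithmetic above: the refiner UNet wants the aesthetic-score time id, so the flag has to be on for it. For completeness, a hedged end-to-end sketch of the base + refiner handoff in plain diffusers with the flag set (model ids, step counts, and scores are assumptions):

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16").to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    torch_dtype=torch.float16, variant="fp16").to("cuda")
refiner.register_to_config(requires_aesthetics_score=True)  # refiner UNet is 2560-dim

prompt = "Professional photograph of a fantasy floating magic tower"
# Base handles the first 80% of denoising and hands its latents to the refiner.
latents = base(prompt, num_inference_steps=32, denoising_end=0.8,
               output_type="latent").images
image = refiner(prompt, image=latents, num_inference_steps=32,
                denoising_start=0.8, aesthetic_score=6.0,
                negative_aesthetic_score=2.5).images[0]
image.save("tower.png")
```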