Bing-su / adetailer

Auto detecting, masking and inpainting with detection model.
GNU Affero General Public License v3.0

[Bug]: SD.NEXT Adetailer Control TypeError: image must be passed and be one of PIL image, numpy array, torch tensor, list of PIL images, list of numpy arrays or list of torch tensors, but is <class 'NoneType'> #506

Closed IowaSovereign closed 7 months ago

IowaSovereign commented 7 months ago

Describe the bug

Tested on both the newest (at the time) SD.NEXT master branch 3c952675 and dev branch d76136fb (reverted to avoid an unrelated issue). Windows 11 24H2, GTX 1080 Ti 11 GB. The issue appears browser-agnostic; reproduced in Mozilla Firefox, Microsoft Edge, and Chrome.

When using the Control tab with any processor and any ControlNet model, in any of the three image-input modes, the following happens: generation proceeds as normal until ADetailer initializes. It detects the face correctly, as shown on the TAESD preview, but when inpainting begins it fails with the error below.

│ ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮             │
│ │ E:\SD.Next_Dev\extensions\adetailer\adetailer\traceback.py:139 in wrapper                        │             │
│ │                                                                                                  │             │
│ │   138 │   │   try:                                                                               │             │
│ │ ❱ 139 │   │   │   return func(*args, **kwargs)                                                   │             │
│ │   140 │   │   except Exception as e:                                                             │             │
│ │                                                                                                  │             │
│ │ E:\SD.Next_Dev\extensions\adetailer\scripts\!adetailer.py:764 in postprocess_image               │             │
│ │                                                                                                  │             │
│ │    763 │   │   │   │   │   continue                                                              │             │
│ │ ❱  764 │   │   │   │   is_processed |= self._postprocess_image_inner(p, pp, args, n=n)           │             │
│ │    765                                                                                           │             │
│ │                                                                                                  │             │
│ │ E:\SD.Next_Dev\extensions\adetailer\scripts\!adetailer.py:725 in _postprocess_image_inner        │             │
│ │                                                                                                  │             │
│ │    724 │   │   │   try:                                                                          │             │
│ │ ❱  725 │   │   │   │   processed = process_images(p2)                                            │             │
│ │    726 │   │   │   except NansException as e:                                                    │             │
│ │                                                                                                  │             │
│ │ E:\SD.Next_Dev\modules\processing.py:187 in process_images                                       │             │
│ │                                                                                                  │             │
│ │   186 │   │   │   with context_hypertile_vae(p), context_hypertile_unet(p):                      │             │
│ │ ❱ 187 │   │   │   │   processed = process_images_inner(p)                                        │             │
│ │   188                                                                                            │             │
│ │                                                                                                  │             │
│ │ E:\SD.Next_Dev\modules\processing.py:297 in process_images_inner                                 │             │
│ │                                                                                                  │             │
│ │   296 │   │   │   │   │   from modules.processing_diffusers import process_diffusers             │             │
│ │ ❱ 297 │   │   │   │   │   x_samples_ddim = process_diffusers(p)                                  │             │
│ │   298 │   │   │   │   else:                                                                      │             │
│ │                                                                                                  │             │
│ │ E:\SD.Next_Dev\modules\processing_diffusers.py:441 in process_diffusers                          │             │
│ │                                                                                                  │             │
│ │   440 │   │   t0 = time.time()                                                                   │             │
│ │ ❱ 441 │   │   output = shared.sd_model(**base_args) # pylint: disable=not-callable               │             │
│ │   442 │   │   if isinstance(output, dict):                                                       │             │
│ │                                                                                                  │             │
│ │ E:\SD.Next_Dev\venv\lib\site-packages\torch\utils\_contextlib.py:115 in decorate_context         │             │
│ │                                                                                                  │             │
│ │   114 │   │   with ctx_factory():                                                                │             │
│ │ ❱ 115 │   │   │   return func(*args, **kwargs)                                                   │             │
│ │   116                                                                                            │             │
│ │                                                                                                  │             │
│ │ E:\SD.Next_Dev\venv\lib\site-packages\diffusers\pipelines\controlnet\pipeline_controlnet_inpaint │             │
│ │ .py:1295 in __call__                                                                             │             │
│ │                                                                                                  │             │
│ │   1294 │   │   # 1. Check inputs. Raise error if not correct                                     │             │
│ │ ❱ 1295 │   │   self.check_inputs(                                                                │             │
│ │   1296 │   │   │   prompt,                                                                       │             │
│ │                                                                                                  │             │
│ │ E:\SD.Next_Dev\venv\lib\site-packages\diffusers\pipelines\controlnet\pipeline_controlnet_inpaint │             │
│ │ .py:802 in check_inputs                                                                          │             │
│ │                                                                                                  │             │
│ │    801 │   │   ):                                                                                │             │
│ │ ❱  802 │   │   │   self.check_image(image, prompt, prompt_embeds)                                │             │
│ │    803 │   │   elif (                                                                            │             │
│ │                                                                                                  │             │
│ │ E:\SD.Next_Dev\venv\lib\site-packages\diffusers\pipelines\controlnet\pipeline_controlnet_inpaint │             │
│ │ .py:889 in check_image                                                                           │             │
│ │                                                                                                  │             │
│ │    888 │   │   ):                                                                                │             │
│ │ ❱  889 │   │   │   raise TypeError(                                                              │             │
│ │    890 │   │   │   │   f"image must be passed and be one of PIL image, numpy array, torch tenso  │             │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯             │
│ TypeError: image must be passed and be one of PIL image, numpy array, torch tensor, list of PIL images, list of  │
│ numpy arrays or list of torch tensors, but is <class 'NoneType'>                                                 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
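The failure mode can be illustrated in isolation. The sketch below is an assumption, not the actual diffusers source: it mimics the kind of type check that `StableDiffusionControlNetInpaintPipeline.check_image` performs on the ControlNet `image` argument. Since ADetailer's inpaint pass here supplied no ControlNet conditioning image, the pipeline received `None`, which fails the check:

```python
# Hypothetical stand-in for the pipeline's input validation (assumption:
# this is a simplified sketch, not the real diffusers implementation).
def check_image(image):
    # diffusers accepts a PIL image, numpy array, torch tensor, or a list
    # of those; anything else -- including None -- raises TypeError.
    allowed = ("Image", "ndarray", "Tensor")
    if isinstance(image, list):
        ok = bool(image) and all(type(i).__name__ in allowed for i in image)
    else:
        ok = type(image).__name__ in allowed
    if not ok:
        raise TypeError(
            "image must be passed and be one of PIL image, numpy array, "
            f"torch tensor, ... but is {type(image)}"
        )

# The inpaint args built for this pipeline class carried image=None:
try:
    check_image(None)
except TypeError as e:
    print(type(e).__name__)  # prints "TypeError"
```

This is why the error only appears once ADetailer switches the active pipeline to the ControlNet inpaint class: the plain inpaint pipeline does not require a ControlNet conditioning image, but the ControlNet variant validates it up front.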

sdnext-MASTER.log sdnext-DEV2.log

Screenshots

No response

Console logs, from start to end.

2024-02-16 23:58:37,121 | sd | INFO | launch | Starting SD.Next
2024-02-16 23:58:37,125 | sd | INFO | installer | Logger: file="E:\SD.Next_Dev\sdnext.log" level=DEBUG size=65 mode=create
2024-02-16 23:58:37,127 | sd | INFO | installer | Python 3.10.11 on Windows
2024-02-16 23:58:37,309 | sd | INFO | installer | Version: app=sd.next updated=2024-02-15 hash=d76136fb url=https://github.com/vladmandic/automatic/tree/HEAD
2024-02-16 23:58:38,211 | sd | INFO | installer | Latest published version: 3c952675fefd2c94b817940ffbd4cd94fd5876c9 2024-02-10T10:42:56Z
2024-02-16 23:58:38,223 | sd | INFO | launch | Platform: arch=AMD64 cpu=Intel64 Family 6 Model 158 Stepping 12, GenuineIntel system=Windows release=Windows-10-10.0.26058-SP0 python=3.10.11
2024-02-16 23:58:38,226 | sd | DEBUG | installer | Setting environment tuning
2024-02-16 23:58:38,227 | sd | DEBUG | installer | HF cache folder: C:\Users\ohiom\.cache\huggingface\hub
2024-02-16 23:58:38,228 | sd | DEBUG | installer | Torch overrides: cuda=False rocm=False ipex=False diml=False openvino=False
2024-02-16 23:58:38,230 | sd | DEBUG | installer | Torch allowed: cuda=True rocm=True ipex=True diml=True openvino=True
2024-02-16 23:58:38,235 | sd | INFO | installer | nVidia CUDA toolkit detected: nvidia-smi present
2024-02-16 23:58:38,353 | sd | DEBUG | installer | Repository update time: Fri Feb 16 00:03:17 2024
2024-02-16 23:58:38,354 | sd | INFO | launch | Startup: standard
2024-02-16 23:58:38,355 | sd | INFO | installer | Verifying requirements
2024-02-16 23:58:38,372 | sd | INFO | installer | Verifying packages
2024-02-16 23:58:38,374 | sd | INFO | installer | Verifying submodules
2024-02-16 23:58:44,129 | sd | DEBUG | installer | Submodule: extensions-builtin/sd-extension-chainner / main
2024-02-16 23:58:44,215 | sd | DEBUG | installer | Submodule: extensions-builtin/sd-extension-system-info / main
2024-02-16 23:58:44,300 | sd | DEBUG | installer | Submodule: extensions-builtin/sd-webui-agent-scheduler / main
2024-02-16 23:58:44,389 | sd | DEBUG | installer | Submodule: extensions-builtin/sd-webui-controlnet / main
2024-02-16 23:58:44,507 | sd | DEBUG | installer | Submodule: extensions-builtin/stable-diffusion-webui-images-browser / main
2024-02-16 23:58:44,592 | sd | DEBUG | installer | Submodule: extensions-builtin/stable-diffusion-webui-rembg / master
2024-02-16 23:58:44,680 | sd | DEBUG | installer | Submodule: modules/k-diffusion / master
2024-02-16 23:58:44,767 | sd | DEBUG | installer | Submodule: wiki / master
2024-02-16 23:58:44,814 | sd | DEBUG | paths | Register paths
2024-02-16 23:58:45,014 | sd | DEBUG | installer | Installed packages: 249
2024-02-16 23:58:45,015 | sd | DEBUG | installer | Extensions all: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg']
2024-02-16 23:58:45,412 | sd | DEBUG | installer | Running extension installer: E:\SD.Next_Dev\extensions-builtin\sd-extension-system-info\install.py
2024-02-16 23:58:46,065 | sd | DEBUG | installer | Running extension installer: E:\SD.Next_Dev\extensions-builtin\sd-webui-agent-scheduler\install.py
2024-02-16 23:58:46,699 | sd | DEBUG | installer | Running extension installer: E:\SD.Next_Dev\extensions-builtin\sd-webui-controlnet\install.py
2024-02-16 23:58:47,340 | sd | DEBUG | installer | Running extension installer: E:\SD.Next_Dev\extensions-builtin\stable-diffusion-webui-images-browser\install.py
2024-02-16 23:58:47,984 | sd | DEBUG | installer | Running extension installer: E:\SD.Next_Dev\extensions-builtin\stable-diffusion-webui-rembg\install.py
2024-02-16 23:58:48,663 | sd | DEBUG | installer | Extensions all: ['adetailer']
2024-02-16 23:58:48,664 | sd | DEBUG | installer | Running extension installer: E:\SD.Next_Dev\extensions\adetailer\install.py
2024-02-16 23:58:49,374 | sd | INFO | installer | Extensions enabled: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg', 'adetailer']
2024-02-16 23:58:49,376 | sd | INFO | installer | Verifying requirements
2024-02-16 23:58:49,392 | sd | DEBUG | launch | Setup complete without errors: 1708127929
2024-02-16 23:58:49,401 | sd | INFO | installer | Extension preload: {'extensions-builtin': 0.0, 'extensions': 0.0}
2024-02-16 23:58:49,403 | sd | DEBUG | launch | Starting module: <module 'webui' from 'E:\\SD.Next_Dev\\webui.py'>
2024-02-16 23:58:49,404 | sd | INFO | launch | Command line args: ['--port', '7855', '--debug'] port=7855 debug=True
2024-02-16 23:58:49,406 | sd | DEBUG | launch | Env flags: []
2024-02-16 23:58:55,489 | sd | DEBUG | installer | Package not found: olive-ai
2024-02-16 23:58:56,792 | sd | INFO | loader | Load packages: {'torch': '2.2.0+cu121', 'diffusers': '0.26.3', 'gradio': '3.43.2'}
2024-02-16 23:58:57,886 | sd | DEBUG | shared | Read: file="config.json" json=41 bytes=2298 time=0.000
2024-02-16 23:58:57,890 | sd | INFO | shared | Engine: backend=Backend.DIFFUSERS compute=cuda device=cuda attention="Scaled-Dot-Product" mode=no_grad
2024-02-16 23:58:57,972 | sd | INFO | shared | Device: device=NVIDIA GeForce GTX 1080 Ti n=1 arch=sm_90 cap=(6, 1) cuda=12.1 cudnn=8801 driver=551.52
2024-02-16 23:58:59,275 | sd | DEBUG | __init__ | ONNX: version=1.17.0 provider=CUDAExecutionProvider, available=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
2024-02-16 23:58:59,439 | sd | DEBUG | sd_hijack | Importing LDM
2024-02-16 23:58:59,463 | sd | DEBUG | webui | Entering start sequence
2024-02-16 23:58:59,467 | sd | DEBUG | webui | Initializing
2024-02-16 23:58:59,497 | sd | INFO | sd_vae | Available VAEs: path="E:\Stable_Diffusion_Data\models\VAE" items=4
2024-02-16 23:58:59,499 | sd | INFO | extensions | Disabled extensions: ['sd-webui-controlnet']
2024-02-16 23:58:59,502 | sd | DEBUG | modelloader | Scanning diffusers cache: ['E:\\Stable_Diffusion_Data\\models\\Diffusers'] items=3 time=0.00
2024-02-16 23:58:59,504 | sd | DEBUG | shared | Read: file="cache.json" json=2 bytes=399 time=0.001
2024-02-16 23:58:59,517 | sd | DEBUG | shared | Read: file="metadata.json" json=444 bytes=7081269 time=0.011
2024-02-16 23:58:59,530 | sd | INFO | sd_models | Available models: path="E:\Stable_Diffusion_Data\models\Stable-diffusion" items=128 time=0.03
2024-02-16 23:58:59,617 | sd | DEBUG | webui | Load extensions
2024-02-16 23:58:59,702 | sd | INFO | networks | LoRA networks: available=319 folders=2
2024-02-16 23:58:59,709 | sd | INFO | script_loading | Extension: script='extensions-builtin\Lora\scripts\lora_script.py' 23:58:59-702623 INFO     LoRA networks: available=319 folders=2
2024-02-16 23:59:00,214 | sd | INFO | script_loading | Extension: script='extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py' Using sqlite file: extensions-builtin\sd-webui-agent-scheduler\task_scheduler.sqlite3
2024-02-16 23:59:01,406 | sd | INFO | script_loading | Extension: script='extensions\adetailer\scripts\!adetailer.py' [-] ADetailer initialized. version: 24.1.2, num models: 14
2024-02-16 23:59:01,419 | sd | INFO | webui | Extensions init time: 1.80 Lora=0.06 sd-extension-chainner=0.06 sd-webui-agent-scheduler=0.44 stable-diffusion-webui-images-browser=0.30 adetailer=0.89
2024-02-16 23:59:01,438 | sd | DEBUG | shared | Read: file="html/upscalers.json" json=4 bytes=2672 time=0.000
2024-02-16 23:59:01,440 | sd | DEBUG | shared | Read: file="extensions-builtin\sd-extension-chainner\models.json" json=24 bytes=2719 time=0.000
2024-02-16 23:59:01,443 | sd | DEBUG | chainner_model | chaiNNer models: path="E:\Stable_Diffusion_Data\models\chaiNNer" defined=24 discovered=0 downloaded=24
2024-02-16 23:59:01,446 | sd | DEBUG | upscaler | Upscaler type=ESRGAN folder="E:\Stable_Diffusion_Data\models\ESRGAN" model="4x_foolhardy_Remacri" path="E:\Stable_Diffusion_Data\models\ESRGAN\4x_foolhardy_Remacri.pth"
2024-02-16 23:59:01,448 | sd | DEBUG | upscaler | Upscaler type=ESRGAN folder="E:\Stable_Diffusion_Data\models\ESRGAN" model="4x_NickelbackFS_72000_G" path="E:\Stable_Diffusion_Data\models\ESRGAN\4x_NickelbackFS_72000_G.pth"
2024-02-16 23:59:01,450 | sd | DEBUG | upscaler | Upscaler type=ESRGAN folder="E:\Stable_Diffusion_Data\models\ESRGAN" model="ESRGAN 4x CountryRoads" path="E:\Stable_Diffusion_Data\models\ESRGAN\ESRGAN 4x CountryRoads.pth"
2024-02-16 23:59:01,456 | sd | DEBUG | modelloader | Load upscalers: total=55 downloaded=48 user=3 time=0.03 ['None', 'Lanczos', 'Nearest', 'ChaiNNer', 'ESRGAN', 'LDSR', 'RealESRGAN', 'SCUNet', 'SD', 'SwinIR']
2024-02-16 23:59:01,485 | sd | DEBUG | styles | Load styles: folder="E:\Stable_Diffusion_Data\models\styles" items=308 time=0.03
2024-02-16 23:59:01,490 | sd | DEBUG | webui | Creating UI
2024-02-16 23:59:01,492 | sd | INFO | theme | UI theme: name="black-teal" style=Auto base=sdnext.css
2024-02-16 23:59:01,503 | sd | DEBUG | ui_txt2img | UI initialize: txt2img
2024-02-16 23:59:02,473 | sd | DEBUG | shared | Read: file="html\reference.json" json=36 bytes=19033 time=0.001
2024-02-16 23:59:03,100 | sd | DEBUG | ui_extra_networks | Extra networks: page='model' items=164 subfolders=2 tab=txt2img folders=['E:\\Stable_Diffusion_Data\\models\\Stable-diffusion', 'E:\\Stable_Diffusion_Data\\models\\Diffusers', 'models\\Reference'] list=1.52 thumb=0.50 desc=0.26 info=2.12 workers=4
2024-02-16 23:59:03,155 | sd | DEBUG | ui_extra_networks | Extra networks: page='style' items=308 subfolders=1 tab=txt2img folders=['E:\\Stable_Diffusion_Data\\models\\styles', 'html'] list=1.00 thumb=0.20 desc=0.00 info=0.00 workers=4
2024-02-16 23:59:03,163 | sd | DEBUG | ui_extra_networks | Extra networks: page='embedding' items=47 subfolders=0 tab=txt2img folders=['E:\\Stable_Diffusion_Data\\models\\embeddings'] list=1.31 thumb=0.34 desc=0.11 info=0.41 workers=4
2024-02-16 23:59:03,167 | sd | DEBUG | ui_extra_networks | Extra networks: page='hypernetwork' items=0 subfolders=0 tab=txt2img folders=['E:\\Stable_Diffusion_Data\\models\\hypernetworks'] list=0.00 thumb=0.00 desc=0.00 info=0.00 workers=4
2024-02-16 23:59:03,171 | sd | DEBUG | ui_extra_networks | Extra networks: page='vae' items=4 subfolders=0 tab=txt2img folders=['E:\\Stable_Diffusion_Data\\models\\VAE'] list=0.08 thumb=0.03 desc=0.00 info=0.03 workers=4
2024-02-16 23:59:03,218 | sd | DEBUG | ui_extra_networks | Extra networks: page='lora' items=319 subfolders=11 tab=txt2img folders=['E:\\Stable_Diffusion_Data\\models\\Lora', 'E:\\Stable_Diffusion_Data\\models\\LyCORIS'] list=1.56 thumb=0.05 desc=0.36 info=3.23 workers=4
2024-02-16 23:59:03,335 | sd | DEBUG | ui_img2img | UI initialize: img2img
2024-02-16 23:59:03,591 | sd | DEBUG | ui_control_helpers | UI initialize: control models=E:\Stable_Diffusion_Data\models\control
2024-02-16 23:59:03,895 | sd | DEBUG | shared | Read: file="ui-config.json" json=53 bytes=2251 time=0.000
2024-02-16 23:59:03,998 | sd | DEBUG | theme | Themes: builtin=11 gradio=5 huggingface=0
2024-02-16 23:59:06,195 | sd | DEBUG | ui_extensions | Extension list: processed=337 installed=8 enabled=7 disabled=1 visible=337 hidden=0
2024-02-16 23:59:06,377 | sd | DEBUG | webui | Root paths: ['E:\\SD.Next_Dev']
2024-02-16 23:59:06,486 | sd | INFO | webui | Local URL: http://127.0.0.1:7855/
2024-02-16 23:59:06,488 | sd | DEBUG | webui | Gradio functions: registered=2382
2024-02-16 23:59:06,490 | sd | INFO | middleware | Initializing middleware
2024-02-16 23:59:06,495 | sd | DEBUG | webui | Creating API
2024-02-16 23:59:06,694 | sd | INFO | task_runner | [AgentScheduler] Task queue is empty
2024-02-16 23:59:06,696 | sd | INFO | api | [AgentScheduler] Registering APIs
2024-02-16 23:59:06,863 | sd | DEBUG | webui | Scripts setup: ['IP Adapters:0.018', 'AnimateDiff:0.011', 'ADetailer:0.08', 'Prompt Matrix:0.005', 'X/Y/Z Grid:0.014', 'Face:0.016', 'Stable Video Diffusion:0.006']
2024-02-16 23:59:06,865 | sd | DEBUG | sd_models | Model metadata: file="metadata.json" no changes
2024-02-16 23:59:06,866 | sd | DEBUG | webui | Model auto load disabled
2024-02-16 23:59:06,868 | sd | DEBUG | shared | Save: file="config.json" json=41 bytes=2238 time=0.001
2024-02-16 23:59:06,870 | sd | DEBUG | script_callbacks | Script callback init time: image_browser.py:ui_tabs=0.46 system-info.py:app_started=0.08 task_scheduler.py:app_started=0.19
2024-02-16 23:59:06,872 | sd | INFO | webui | Startup time: 17.46 torch=6.00 olive=0.09 gradio=1.30 libraries=2.65 extensions=1.80 face-restore=0.08 ui-en=1.95 ui-txt2img=0.09 ui-img2img=0.10 ui-control=0.13 ui-settings=0.25 ui-extensions=2.09 ui-defaults=0.09 launch=0.18 api=0.10 app-started=0.27
2024-02-17 00:00:00,125 | sd | DEBUG | launch | Server: alive=True jobs=0 requests=1 uptime=63 memory=1.26/63.92 backend=Backend.DIFFUSERS state=idle
2024-02-17 00:00:53,589 | sd | INFO | server | MOTD: N/A
2024-02-17 00:00:58,018 | sd | DEBUG | theme | Themes: builtin=11 gradio=5 huggingface=0
2024-02-17 00:00:58,358 | sd | INFO | api | Browser session: user=None client=127.0.0.1 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36 Edg/122.0.0.0
2024-02-17 00:01:09,756 | sd | DEBUG | generation_parameters_copypaste | Paste prompt: type="params" prompt="(portrait of a tall, elegant, mature, 38-year-old female with a curvy build), fit BREAK long blonde hair, BREAK blue eyes, full lips, plump lips, pouty lips,  BREAK  elegant clothing, Baby Yellow white shirt, Burlap Brown pencil skirt, (pantyhose:1.1), (high heels:1.1), dramatic lighting, high resolution, (portrait photography), realistic, detailed, ((professional photograph)) <lora:SD1.5 General Utility - Perfect Eyes:0.6>
Negative prompt: no text, poorly drawn hands, poorly drawn feet, poorly drawn face, bad anatomy, low quality, beginner, amateur, distorted face, bad quality, SD15_Negative_badhands5 SD15_Negative_FastNegativeV2 SD15_Negative_Dream_BadDream Asian-Less Negative
Steps: 20, Seed: 512088241, Sampler: Euler a, CFG scale: 6, Size: 512x768, Parser: Full parser, Model: SD1.5 Dreamshaper V8, Model hash: 879db523c3, VAE: vae-ft-mse-840000-ema-pruned, Backend: Diffusers, App: SD.Next, Version: d76136f, Operations: control; txt2img; img2img, Init image size: 512x768, Init image hash: 05f08ae7, Resize scale: 0.4, Denoising strength: 0.5, Resize mode: Fixed, Control model: OpenPose, Control conditioning: 1.0, Control mode: ControlNet, Control resize: Nearest, Control process: ['OpenPose'], ADetailer model: face_yolov8s.pt, ADetailer confidence: 0.3, ADetailer dilate erode: 4, ADetailer mask blur: 4, ADetailer denoising strength: 0.4, ADetailer inpaint only masked: True, ADetailer inpaint padding: 32, ADetailer version: 24.1.2, Lora hashes: "SD1.5 General Utility - Perfect Eyes: e6cbc754", Sampler options: , Pipeline: StableDiffusionControlNetInpaintPipeline, Embeddings: "SD15_Negative_FastNegativeV2, Asian-Less Negative, SD15_Negative_Dream_BadDream, SD15_Negative_badhands5""
2024-02-17 00:01:09,763 | sd | DEBUG | generation_parameters_copypaste | Settings overrides: []
2024-02-17 00:01:11,144 | sd | DEBUG | modeldata | Model requested: fn=update_token_counter
2024-02-17 00:01:11,146 | sd | INFO | sd_models | Select: model="SD1.5 Dreamshaper V8 [879db523c3]"
2024-02-17 00:01:11,148 | sd | DEBUG | sd_models | Load model: existing=False target=E:\Stable_Diffusion_Data\models\Stable-diffusion\SD1.5 Dreamshaper V8.safetensors info=None
2024-02-17 00:01:11,181 | sd | DEBUG | devices | Desired Torch parameters: dtype=FP16 no-half=False no-half-vae=False upscast=False
2024-02-17 00:01:11,183 | sd | INFO | devices | Setting Torch parameters: device=cuda dtype=torch.float16 vae=torch.float16 unet=torch.float16 context=no_grad fp16=True bf16=None
2024-02-17 00:01:11,185 | sd | INFO | sd_vae | Loading VAE: model=E:\Stable_Diffusion_Data\models\VAE\vae-ft-mse-840000-ema-pruned.safetensors source=settings
2024-02-17 00:01:11,186 | sd | DEBUG | sd_vae | Diffusers VAE load config: {'low_cpu_mem_usage': False, 'torch_dtype': torch.float16, 'use_safetensors': True, 'variant': 'fp16'}
2024-02-17 00:01:11,189 | sd | INFO | sd_models | Autodetect: vae="Stable Diffusion" class=StableDiffusionPipeline file="E:\Stable_Diffusion_Data\models\Stable-diffusion\SD1.5 Dreamshaper V8.safetensors" size=2034MB
2024-02-17 00:01:11,349 | sd | DEBUG | sd_models | Diffusers loading: path="E:\Stable_Diffusion_Data\models\Stable-diffusion\SD1.5 Dreamshaper V8.safetensors"
2024-02-17 00:01:11,351 | sd | INFO | sd_models | Autodetect: model="Stable Diffusion" class=StableDiffusionPipeline file="E:\Stable_Diffusion_Data\models\Stable-diffusion\SD1.5 Dreamshaper V8.safetensors" size=2034MB
2024-02-17 00:01:13,316 | sd | DEBUG | sd_models | Setting model: pipeline=StableDiffusionPipeline config={'low_cpu_mem_usage': True, 'torch_dtype': torch.float16, 'load_connected_pipeline': True, 'extract_ema': True, 'use_safetensors': True}
2024-02-17 00:01:13,320 | sd | DEBUG | sd_models | Setting model VAE: name=vae-ft-mse-840000-ema-pruned.safetensors
2024-02-17 00:01:13,323 | sd | DEBUG | sd_models | Setting model: enable VAE slicing
2024-02-17 00:01:14,666 | sd | INFO | textual_inversion | Load embeddings: loaded=29 skipped=18 time=0.32
2024-02-17 00:01:15,094 | sd | DEBUG | devices | GC: collected=3820 device=cuda {'ram': {'used': 2.17, 'total': 63.92}, 'gpu': {'used': 3.16, 'total': 11.0}, 'retries': 0, 'oom': 0} time=0.43
2024-02-17 00:01:15,100 | sd | INFO | sd_models | Load model: time=3.52 load=3.52 native=512 {'ram': {'used': 2.17, 'total': 63.92}, 'gpu': {'used': 3.16, 'total': 11.0}, 'retries': 0, 'oom': 0}
2024-02-17 00:01:23,481 | sd | DEBUG | ui_control_helpers | Control input: type=PIL.Image input=[<PIL.Image.Image image mode=RGB size=1280x1920 at 0x1AC90779030>]
2024-02-17 00:01:29,765 | sd | DEBUG | processors | Control Processor loading: id="OpenPose" class=OpenposeDetector
2024-02-17 00:01:31,264 | sd | DEBUG | processors | Control Processor loaded: id="OpenPose" class=OpenposeDetector time=1.50
2024-02-17 00:01:33,084 | sd | DEBUG | controlnet | Control ControlNet model loading: id="OpenPose" path="lllyasviel/control_v11p_sd15_openpose"
2024-02-17 00:01:34,303 | sd | DEBUG | controlnet | Control ControlNet model loaded: id="OpenPose" path="lllyasviel/control_v11p_sd15_openpose" time=1.22
2024-02-17 00:01:43,218 | sd | DEBUG | ui_control_helpers | Control input: type=PIL.Image input=[<PIL.Image.Image image mode=RGB size=1280x1920 at 0x1AC90B15360>]
2024-02-17 00:01:43,601 | sd | DEBUG | run | Control ControlNet unit: i=1 process=OpenPose model=OpenPose strength=1.0 guess=False start=0 end=1
2024-02-17 00:01:43,631 | sd | DEBUG | controlnet | Control ControlNet pipeline: class=StableDiffusionControlNetPipeline time=0.03
2024-02-17 00:01:43,666 | sd | DEBUG | sd_models | Setting model: enable VAE slicing
2024-02-17 00:01:43,674 | sd | DEBUG | images | Image resize: input=<PIL.Image.Image image mode=RGB size=1280x1920 at 0x1AC90B15360> mode=1 target=512x768 upscaler=Nearest function=control_run
2024-02-17 00:01:45,237 | sd | DEBUG | processors | Control Processor: id="OpenPose" mode=RGB args={'include_body': True, 'include_hand': False, 'include_face': False} time=1.54
2024-02-17 00:01:45,788 | sd | INFO | extra_networks_lora | LoRA apply: ['SD1.5 General Utility - Perfect Eyes'] patch=0.00 load=0.53
2024-02-17 00:01:46,398 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionControlNetPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 308, 768]), 'negative_prompt_embeds': torch.Size([1, 308, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 20, 'eta': 1.0, 'width': 512, 'height': 768, 'controlnet_conditioning_scale': 1.0, 'control_guidance_start': 0.0, 'control_guidance_end': 1.0, 'guess_mode': False, 'image': <class 'list'>, 'parser': 'Full parser'}
2024-02-17 00:01:46,447 | sd | DEBUG | sd_samplers | Sampler: sampler="Euler a" config={'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'prediction_type': 'epsilon', 'rescale_betas_zero_snr': False}
2024-02-17 00:01:53,022 | sd | DEBUG | sd_vae_taesd | VAE load: type=taesd model=E:\Stable_Diffusion_Data\models\TAESD\taesd_decoder.pth
2024-02-17 00:01:59,843 | sd | DEBUG | launch | Server: alive=True jobs=0 requests=220 uptime=182 memory=3.9/63.92 backend=Backend.DIFFUSERS state=idle
2024-02-17 00:02:13,052 | sd | DEBUG | sd_models | Pipeline class change: original=StableDiffusionControlNetPipeline target=StableDiffusionControlNetInpaintPipeline
2024-02-17 00:02:13,057 | sd | DEBUG | masking | Mask: size=512x768 masked=6952px area=0.02 auto=None blur=0.031 erode=0.25 dilate=0.01 type=Grayscale time=0.00
2024-02-17 00:02:13,065 | sd | DEBUG | images | Image resize: input=<PIL.Image.Image image mode=L size=136x204 at 0x1AC93157CD0> mode=2 target=512x768 upscaler=None function=init
2024-02-17 00:02:13,077 | sd | DEBUG | images | Image resize: input=<PIL.Image.Image image mode=RGB size=136x204 at 0x1AC92FDF6A0> mode=3 target=512x768 upscaler=None function=init
2024-02-17 00:02:13,084 | sd | INFO | extra_networks_lora | LoRA apply: ['SD1.5 General Utility - Perfect Eyes'] patch=0.00 load=0.00
2024-02-17 00:02:13,427 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionControlNetInpaintPipeline task=DiffusersTaskType.INPAINTING set={'prompt_embeds': torch.Size([1, 308, 768]), 'negative_prompt_embeds': torch.Size([1, 308, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'image': <class 'list'>, 'mask_image': <class 'PIL.Image.Image'>, 'strength': 0.4, 'height': 768, 'width': 512, 'parser': 'Full parser'}
2024-02-17 00:02:13,437 | sd | DEBUG | sd_samplers | Sampler: sampler="Euler a" config={'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'prediction_type': 'epsilon', 'rescale_betas_zero_snr': False}
2024-02-17 00:02:13,935 | sd | ERROR | errors | Running script postprocess image: extensions\adetailer\scripts\!adetailer.py: TypeError
2024-02-17 00:02:14,023 | sd | INFO | images | Saving: image="E:\Stable_Diffusion_Data\outputs_SD_NEXT\control\2024-02-17\SD1.5 Dreamshaper V8\-00000-SD1.5 Dreamshaper V8-Euler a.png" type=PNG resolution=512x768 size=0
2024-02-17 00:02:14,027 | sd | INFO | images | Saving: text="E:\Stable_Diffusion_Data\outputs_SD_NEXT\control\2024-02-17\SD1.5 Dreamshaper V8\-00000-SD1.5 Dreamshaper V8-Euler a.txt" len=1709
2024-02-17 00:02:14,200 | sd | INFO | images | Saving: json="E:\Stable_Diffusion_Data\outputs_SD_NEXT\Outputs.json" records=159
2024-02-17 00:02:14,202 | sd | INFO | processing | Processed: images=1 time=28.95 its=0.69 memory={'ram': {'used': 4.07, 'total': 63.92}, 'gpu': {'used': 4.55, 'total': 11.0}, 'retries': 0, 'oom': 0}
2024-02-17 00:02:14,417 | sd | INFO | run | Control: pipeline units=1 process=1 time=30.61 init=0.03 proc=1.62 ctrl=28.96 outputs=1
2024-02-17 00:03:33,179 | sd | INFO | webui | Exiting

List of installed extensions

Adetailer

IowaSovereign commented 7 months ago

Fixed by vladmandic in dev commit 2a10875.