vladmandic / automatic

SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
https://github.com/vladmandic/automatic
GNU Affero General Public License v3.0

Inpainting returning TypeError after dev update #2983

Closed MysticDaedra closed 6 months ago

MysticDaedra commented 6 months ago

Issue Description

EDIT: As per the comment below, this actually appears to be a bug with inpainting, not ADetailer.

I'm getting a traceback error with today's dev update. I didn't have this problem yesterday, so I think the update changed something. Anyway, here's the log:

12:42:50-282391 INFO     Verifying requirements
12:42:50-291517 INFO     Updating Wiki
12:42:50-358257 DEBUG    Submodule: D:\automatic\wiki / master
12:42:50-978269 DEBUG    Setup complete without errors: 1710704571
12:42:50-987727 DEBUG    Extension preload: {'extensions-builtin': 0.0, 'extensions': 0.0}
12:42:50-988867 DEBUG    Starting module: <module 'webui' from 'D:\\automatic\\webui.py'>
12:42:50-989868 INFO     Command line args: ['--debug', '--upgrade', '--share'] share=True upgrade=True debug=True
12:42:50-990868 DEBUG    Env flags: ['SD_MASK_DEBUG=true']
12:42:58-953093 INFO     Load packages: {'torch': '2.2.0+cu121', 'diffusers': '0.27.0', 'gradio': '3.43.2'}
12:43:01-627763 DEBUG    Read: file="config.json" json=67 bytes=4112 time=0.000
12:43:01-628425 DEBUG    Unknown settings: ['cross_attention_options', 'ad_max_models', 'civitai_link_key',
                         'multiple_tqdm', 'ad_same_seed_for_each_tap', 'mudd_states', 'civitai_folder_lyco',
                         'diffusers_aesthetics_score', 'image_browser_active_tabs', 'ad_extra_models_dir',
                         'canvas_zoom_mask_clear', 'canvas_zoom_draw_staight_lines']
12:43:01-630918 INFO     Engine: backend=Backend.DIFFUSERS compute=cuda device=cuda attention="Scaled-Dot-Product"
                         mode=no_grad
12:43:01-695317 INFO     Device: device=NVIDIA GeForce RTX 3070 n=1 arch=sm_90 cap=(8, 6) cuda=12.1 cudnn=8801
                         driver=551.61
12:43:01-702122 DEBUG    Read: file="html\reference.json" json=36 bytes=21493 time=0.005
12:43:03-621555 TRACE    Trace: MASK
12:43:04-006824 DEBUG    ONNX: version=1.17.0 provider=CUDAExecutionProvider, available=['TensorrtExecutionProvider',
                         'CUDAExecutionProvider', 'CPUExecutionProvider']
12:43:04-155795 DEBUG    Importing LDM
12:43:04-185990 DEBUG    Entering start sequence
12:43:04-188445 DEBUG    Initializing
12:43:04-233240 INFO     Available VAEs: path="D:\Stable Diffusion Files\Models\VAE" items=2
12:43:04-235794 INFO     Disabled extensions: ['sd-webui-controlnet']
12:43:04-238265 DEBUG    Scanning diffusers cache: folder=D:\Stable Diffusion Files\Models\Diffusers items=3 time=0.00
12:43:04-250981 DEBUG    Read: file="cache.json" json=2 bytes=129039 time=0.011
12:43:04-265025 DEBUG    Read: file="metadata.json" json=554 bytes=1315841 time=0.012
12:43:04-272034 INFO     Available models: path="D:\Stable Diffusion Files\Models\Checkpoints" items=19 time=0.04
12:43:04-374816 DEBUG    Load extensions
12:43:04-443399 INFO     Extension: script='extensions-builtin\Lora\scripts\lora_script.py'
                         12:43:04-439398 INFO     LoRA networks: available=44 folders=2
12:43:04-990918 INFO     Extension: script='extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py' Using
                         sqlite file: extensions-builtin\sd-webui-agent-scheduler\task_scheduler.sqlite3
face_yolov8n.pt: 100%|████████████████████████████████████████████████████████████| 6.23M/6.23M [00:00<00:00, 25.4MB/s]
face_yolov8s.pt: 100%|████████████████████████████████████████████████████████████| 22.5M/22.5M [00:00<00:00, 28.1MB/s]
hand_yolov8n.pt: 100%|████████████████████████████████████████████████████████████| 6.24M/6.24M [00:00<00:00, 27.5MB/s]
person_yolov8n-seg.pt: 100%|██████████████████████████████████████████████████████| 6.78M/6.78M [00:00<00:00, 29.5MB/s]
person_yolov8s-seg.pt: 100%|██████████████████████████████████████████████████████| 23.9M/23.9M [00:00<00:00, 29.6MB/s]
12:43:08-839207 INFO     Extension: script='extensions\adetailer\scripts\!adetailer.py' [-] ADetailer initialized.
                         version: 24.3.1, num models: 17
12:43:09-106268 ERROR    Module load: extensions\sd_civitai_extension\scripts\api.py: ImportError
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ D:\automatic\modules\script_loading.py:29 in load_module                                                             │
│                                                                                                                      │
│   28 │   │   │   │   with contextlib.redirect_stdout(io.StringIO()) as stdout:                                       │
│ ❱ 29 │   │   │   │   │   module_spec.loader.exec_module(module)                                                      │
│   30 │   │   │   setup_logging() # reset since scripts can hijaack logging                                           │
│ in exec_module:883                                                                                                   │
│ in _call_with_frames_removed:241                                                                                     │
│                                                                                                                      │
│ D:\automatic\extensions\sd_civitai_extension\scripts\api.py:4 in <module>                                            │
│                                                                                                                      │
│    3 from fastapi import FastAPI                                                                                     │
│ ❱  4 from scripts.link import reconnect_to_civitai,get_link_status                                                   │
│    5 from modules import script_callbacks as script_callbacks                                                        │
│                                                                                                                      │
│ D:\automatic\extensions\sd_civitai_extension\scripts\link.py:3 in <module>                                           │
│                                                                                                                      │
│    2                                                                                                                 │
│ ❱  3 import civitai.link as link                                                                                     │
│    4                                                                                                                 │
│                                                                                                                      │
│ D:\automatic\extensions\sd_civitai_extension\civitai\link.py:7 in <module>                                           │
│                                                                                                                      │
│     6 import civitai.lib as civitai                                                                                  │
│ ❱   7 import civitai.generation as generation                                                                        │
│     8 from civitai.models import Command, CommandActivitiesList, CommandImageTxt2Img, CommandResourcesAdd, CommandAc │
│                                                                                                                      │
│ D:\automatic\extensions\sd_civitai_extension\civitai\generation.py:4 in <module>                                     │
│                                                                                                                      │
│    3 from modules.api.api import encode_pil_to_base64, validate_sampler_name                                         │
│ ❱  4 from modules.api.models import StableDiffusionTxt2ImgProcessingAPI, TextToImageResponse                         │
│    5 from modules.processing import StableDiffusionProcessingTxt2Img, process_images                                 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
ImportError: cannot import name 'TextToImageResponse' from 'modules.api.models' (D:\automatic\modules\api\models.py)
12:43:09-163685 ERROR    Module load: extensions\sd_civitai_extension\scripts\link.py: ImportError
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ D:\automatic\modules\script_loading.py:29 in load_module                                                             │
│                                                                                                                      │
│   28 │   │   │   │   with contextlib.redirect_stdout(io.StringIO()) as stdout:                                       │
│ ❱ 29 │   │   │   │   │   module_spec.loader.exec_module(module)                                                      │
│   30 │   │   │   setup_logging() # reset since scripts can hijaack logging                                           │
│ in exec_module:883                                                                                                   │
│ in _call_with_frames_removed:241                                                                                     │
│                                                                                                                      │
│ D:\automatic\extensions\sd_civitai_extension\scripts\link.py:3 in <module>                                           │
│                                                                                                                      │
│    2                                                                                                                 │
│ ❱  3 import civitai.link as link                                                                                     │
│    4                                                                                                                 │
│                                                                                                                      │
│ D:\automatic\extensions\sd_civitai_extension\civitai\link.py:7 in <module>                                           │
│                                                                                                                      │
│     6 import civitai.lib as civitai                                                                                  │
│ ❱   7 import civitai.generation as generation                                                                        │
│     8 from civitai.models import Command, CommandActivitiesList, CommandImageTxt2Img, CommandResourcesAdd, CommandAc │
│                                                                                                                      │
│ D:\automatic\extensions\sd_civitai_extension\civitai\generation.py:4 in <module>                                     │
│                                                                                                                      │
│    3 from modules.api.api import encode_pil_to_base64, validate_sampler_name                                         │
│ ❱  4 from modules.api.models import StableDiffusionTxt2ImgProcessingAPI, TextToImageResponse                         │
│    5 from modules.processing import StableDiffusionProcessingTxt2Img, process_images                                 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
ImportError: cannot import name 'TextToImageResponse' from 'modules.api.models' (D:\automatic\modules\api\models.py)
12:43:09-174105 ERROR    Module load: extensions\sd_civitai_extension\scripts\settings.py: ImportError
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ D:\automatic\modules\script_loading.py:29 in load_module                                                             │
│                                                                                                                      │
│   28 │   │   │   │   with contextlib.redirect_stdout(io.StringIO()) as stdout:                                       │
│ ❱ 29 │   │   │   │   │   module_spec.loader.exec_module(module)                                                      │
│   30 │   │   │   setup_logging() # reset since scripts can hijaack logging                                           │
│ in exec_module:883                                                                                                   │
│ in _call_with_frames_removed:241                                                                                     │
│                                                                                                                      │
│ D:\automatic\extensions\sd_civitai_extension\scripts\settings.py:1 in <module>                                       │
│                                                                                                                      │
│ ❱  1 from civitai.link import on_civitai_link_key_changed                                                            │
│    2 from modules import shared, script_callbacks                                                                    │
│                                                                                                                      │
│ D:\automatic\extensions\sd_civitai_extension\civitai\link.py:7 in <module>                                           │
│                                                                                                                      │
│     6 import civitai.lib as civitai                                                                                  │
│ ❱   7 import civitai.generation as generation                                                                        │
│     8 from civitai.models import Command, CommandActivitiesList, CommandImageTxt2Img, CommandResourcesAdd, CommandAc │
│                                                                                                                      │
│ D:\automatic\extensions\sd_civitai_extension\civitai\generation.py:4 in <module>                                     │
│                                                                                                                      │
│    3 from modules.api.api import encode_pil_to_base64, validate_sampler_name                                         │
│ ❱  4 from modules.api.models import StableDiffusionTxt2ImgProcessingAPI, TextToImageResponse                         │
│    5 from modules.processing import StableDiffusionProcessingTxt2Img, process_images                                 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
ImportError: cannot import name 'TextToImageResponse' from 'modules.api.models' (D:\automatic\modules\api\models.py)
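All three tracebacks above share a single root cause: sd_civitai_extension imports `TextToImageResponse` from `modules.api.models`, but the current SD.Next dev branch no longer exports that name. A guarded import with a local fallback is one common way extension authors survive this kind of API drift — a minimal, hypothetical sketch (the fallback fields are assumptions, not the extension's actual code):

```python
from dataclasses import dataclass, field

try:
    # Removed from SD.Next's modules.api.models in newer dev builds.
    from modules.api.models import TextToImageResponse
except ImportError:
    # Hypothetical stand-in so the extension can still load; the field
    # names here are assumptions about the old response shape.
    @dataclass
    class TextToImageResponse:
        images: list = field(default_factory=list)
        parameters: dict = field(default_factory=dict)
        info: str = ""
```

With a shim like this the extension would at least finish loading instead of failing at import time, though any code path that actually serializes the response would still need updating.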
12:43:09-197723 DEBUG    Extensions init time: 4.82 sd-extension-chainner=0.05 sd-webui-agent-scheduler=0.49
                         stable-diffusion-webui-images-browser=0.58 stable-diffusion-webui-rembg=0.16 adetailer=3.11
                         sd_civitai_extension=0.30
12:43:09-219743 DEBUG    Read: file="html/upscalers.json" json=4 bytes=2672 time=0.005
12:43:09-225246 DEBUG    Read: file="extensions-builtin\sd-extension-chainner\models.json" json=24 bytes=2719 time=0.004
12:43:09-228163 DEBUG    chaiNNer models: path="D:\Stable Diffusion Files\Models\chaiNNer" defined=24 discovered=5
                         downloaded=9
12:43:09-229898 DEBUG    Upscaler type=ESRGAN folder="D:\Stable Diffusion Files\Models\ESRGAN"
                         model="4x_foolhardy_Remacri" path="D:\Stable Diffusion
                         Files\Models\ESRGAN\4x_foolhardy_Remacri.pth"
12:43:09-230899 DEBUG    Upscaler type=ESRGAN folder="D:\Stable Diffusion Files\Models\ESRGAN" model="4x_NMKD-Siax_200k"
                         path="D:\Stable Diffusion Files\Models\ESRGAN\4x_NMKD-Siax_200k.pth"
12:43:09-232899 DEBUG    Upscaler type=SwinIR folder="D:\Stable Diffusion Files\Models\SwinIR" model="SwinIR_4x"
                         path="D:\Stable Diffusion Files\Models\SwinIR\SwinIR_4x.pth"
12:43:09-235838 DEBUG    Load upscalers: total=60 downloaded=22 user=8 time=0.03 ['None', 'Lanczos', 'Nearest',
                         'ChaiNNer', 'ESRGAN', 'LDSR', 'RealESRGAN', 'SCUNet', 'SD', 'SwinIR']
12:43:09-664015 DEBUG    Load styles: folder="D:\Stable Diffusion Files\Models\Styles" items=297 time=0.43
12:43:09-670076 DEBUG    Creating UI
12:43:09-671078 INFO     UI theme: name="invoked" style=Auto base=sdnext.css
12:43:09-681170 DEBUG    UI initialize: txt2img
12:43:09-767587 DEBUG    Extra networks: page='model' items=55 subfolders=3 tab=txt2img folders=['D:\\Stable Diffusion
                         Files\\Models\\Checkpoints', 'D:\\Stable Diffusion Files\\Models\\Diffusers',
                         'models\\Reference'] list=0.03 thumb=0.01 desc=0.00 info=0.06 workers=4
12:43:09-778640 WARNING  Extra network removing invalid image: D:\Stable Diffusion
                         Files\Models\Checkpoints\animatelcmSVDXtForOpen_v10.preview.png
12:43:09-792651 DEBUG    Extra networks: page='style' items=297 subfolders=1 tab=txt2img folders=['D:\\Stable Diffusion
                         Files\\Models\\Styles', 'html'] list=0.03 thumb=0.00 desc=0.00 info=0.00 workers=4
12:43:09-795154 DEBUG    Extra networks: page='embedding' items=13 subfolders=0 tab=txt2img folders=['D:\\Stable
                         Diffusion Files\\Models\\Embeddings'] list=0.05 thumb=0.00 desc=0.00 info=0.04 workers=4
12:43:09-798160 DEBUG    Extra networks: page='hypernetwork' items=0 subfolders=0 tab=txt2img folders=['D:\\Stable
                         Diffusion Files\\Models\\Hypernetworks'] list=0.00 thumb=0.00 desc=0.00 info=0.00 workers=4
12:43:09-799664 DEBUG    Extra networks: page='vae' items=2 subfolders=0 tab=txt2img folders=['D:\\Stable Diffusion
                         Files\\Models\\VAE'] list=0.01 thumb=0.00 desc=0.00 info=0.00 workers=4
12:43:09-805167 DEBUG    Extra networks: page='lora' items=44 subfolders=0 tab=txt2img folders=['D:\\Stable Diffusion
                         Files\\Models\\Loras', 'D:\\Stable Diffusion Files\\Models\\LyCORIS'] list=0.07 thumb=0.00
                         desc=0.01 info=0.18 workers=4
12:43:09-957233 DEBUG    UI initialize: img2img
12:43:10-145338 DEBUG    UI initialize: control models=D:\Stable Diffusion Files\Models\Control
12:43:10-456127 DEBUG    Read: file="ui-config.json" json=129 bytes=8462 time=0.005
12:43:10-560509 DEBUG    Themes: builtin=12 gradio=5 huggingface=55
12:43:15-231489 DEBUG    Extension list: processed=343 installed=13 enabled=12 disabled=1 visible=343 hidden=0
12:43:15-403485 DEBUG    Root paths: ['D:\\automatic']
12:43:22-529121 INFO     Local URL: http://127.0.0.1:7860/
12:43:22-530122 INFO     Share URL: https://ad094940fc092c5e0b.gradio.live
12:43:22-531122 DEBUG    Gradio functions: registered=3572
12:43:22-533705 DEBUG    FastAPI middleware: ['Middleware', 'Middleware']
12:43:22-536210 DEBUG    Creating API
12:43:22-707234 INFO     [AgentScheduler] Task queue is empty
12:43:22-710747 INFO     [AgentScheduler] Registering APIs
Civitai: Check resources for missing info files
Civitai: Check resources for missing preview images
12:43:23-164207 DEBUG    Scripts setup: ['IP Adapters:0.015', 'AnimateDiff:0.008', 'ADetailer:0.248', 'X/Y/Z Grid:0.01',
                         'Face:0.012', 'Image-to-Video:0.006', 'Stable Video Diffusion:0.005', 'Ultimate SD
                         upscale:0.007']
12:43:23-166212 DEBUG    Model metadata: file="metadata.json" no changes
12:43:23-168277 DEBUG    Model requested: fn=<lambda>
12:43:23-169277 INFO     Select: model="lightningFusionXL_v14 [fe5ad21d7f]"
12:43:23-170278 DEBUG    Load model: existing=False target=D:\Stable Diffusion
                         Files\Models\Checkpoints\lightningFusionXL_v14.safetensors info=None
12:43:23-210869 DEBUG    Desired Torch parameters: dtype=FP16 no-half=False no-half-vae=False upscast=False
12:43:23-211869 INFO     Setting Torch parameters: device=cuda dtype=torch.float16 vae=torch.float16 unet=torch.float16
                         context=inference_mode fp16=True bf16=None optimization=Scaled-Dot-Product
12:43:23-212869 DEBUG    Diffusers loading: path="D:\Stable Diffusion
                         Files\Models\Checkpoints\lightningFusionXL_v14.safetensors"
12:43:23-213869 INFO     Autodetect: model="Stable Diffusion XL" class=StableDiffusionXLPipeline file="D:\Stable
                         Diffusion Files\Models\Checkpoints\lightningFusionXL_v14.safetensors" size=6617MB
Civitai: Found 9 resources missing preview images
Civitai: Found 6 resources missing info files
Civitai: No info found on Civitai
Civitai: Found 1 hash matches
Civitai: Downloading: "https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/978c168d-3058-4d7c-a9df-699c71ecfb8b/width=450/6811486.jpeg" to D:\Stable Diffusion Files\Models\Checkpoints\animatelcmSVDXtForOpen_v10.preview.png

100%|███████████████████████████████████████████████████████████████████████████████| 582k/582k [00:00<00:00, 11.9MB/s]
Civitai: Updated 1 preview images
12:43:25-098562 DEBUG    Setting model: pipeline=StableDiffusionXLPipeline config={'low_cpu_mem_usage': True,
                         'torch_dtype': torch.float16, 'load_connected_pipeline': True, 'variant': 'fp16',
                         'extract_ema': True, 'original_config_file': 'configs/sd_xl_base.yaml', 'use_safetensors':
                         True}
12:43:25-100437 DEBUG    Setting model: enable model CPU offload
12:43:25-113495 DEBUG    Setting model: enable VAE slicing
12:43:25-113495 DEBUG    Setting model: enable VAE tiling
12:43:28-465328 INFO     Load embeddings: loaded=13 skipped=0 time=3.34
12:43:28-728606 DEBUG    GC: collected=5075 device=cuda {'ram': {'used': 1.41, 'total': 31.9}, 'gpu': {'used': 1.09,
                         'total': 8.0}, 'retries': 0, 'oom': 0} time=0.26
12:43:28-736622 INFO     Load model: time=5.29 load=5.29 native=1024 {'ram': {'used': 1.41, 'total': 31.9}, 'gpu':
                         {'used': 1.09, 'total': 8.0}, 'retries': 0, 'oom': 0}
12:43:28-739623 DEBUG    Script callback init time: image_browser.py:ui_tabs=2.75 system-info.py:app_started=0.07
                         task_scheduler.py:app_started=0.23 iib_setup.py:app_started=0.25
12:43:28-740624 INFO     Startup time: 37.74 torch=6.98 olive=0.07 gradio=0.91 libraries=5.20 extensions=4.82
                         face-restore=0.10 networks=0.43 ui-en=0.23 ui-txt2img=0.13 ui-img2img=0.15 ui-control=0.18
                         ui-settings=0.42 ui-extensions=4.39 ui-defaults=0.09 launch=7.19 api=0.07 app-started=0.55
                         checkpoint=5.57
12:43:28-742624 DEBUG    Save: file="config.json" json=67 bytes=3996 time=0.003
12:43:28-745127 DEBUG    Unused settings: ['cross_attention_options', 'civitai_link_key', 'multiple_tqdm',
                         'mudd_states', 'civitai_folder_lyco', 'diffusers_aesthetics_score']
12:44:00-072564 DEBUG    Server: alive=True jobs=1 requests=7 uptime=60 memory=1.41/31.9 backend=Backend.DIFFUSERS
                         state=idle
12:45:07-295157 INFO     Applying hypertile: unet=448
Loading model: D:\Stable Diffusion Files\Models\Loras\cla1re3 (20).safetensors ━━━━━━━━━━━━━━━━━━ 170.6/170.6 MB 0:00:00
12:45:08-132784 INFO     LoRA apply: ['cla1re3 (20)'] patch=0.00 load=0.82
12:45:08-146174 INFO     Base: class=StableDiffusionXLPipeline
12:45:13-269053 DEBUG    Diffuser pipeline: StableDiffusionXLPipeline task=DiffusersTaskType.TEXT_2_IMAGE
                         set={'prompt_embeds': torch.Size([1, 77, 2048]), 'pooled_prompt_embeds': torch.Size([1, 1280]),
                         'negative_prompt_embeds': torch.Size([1, 77, 2048]), 'negative_pooled_prompt_embeds':
                         torch.Size([1, 1280]), 'guidance_scale': 1, 'generator': device(type='cuda'),
                         'num_inference_steps': 8, 'eta': 1.0, 'guidance_rescale': 0.7, 'denoising_end': None,
                         'output_type': 'latent', 'width': 896, 'height': 1024, 'parser': 'Full parser'}
12:45:13-312226 DEBUG    Sampler: sampler="DDPM" config={'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end':
                         0.012, 'beta_schedule': 'scaled_linear', 'prediction_type': 'epsilon', 'variance_type':
                         'fixed_small', 'clip_sample': False, 'thresholding': False, 'clip_sample_range': 1.0,
                         'sample_max_value': 1.0, 'timestep_spacing': 'leading', 'rescale_betas_zero_snr': True}
Progress  4.27s/it █████████████▏                       38% 3/8 00:18 00:21 Base
12:45:32-148930 DEBUG    VAE load: type=approximate model=D:\automatic\models\VAE-approx\model.pt
Progress  2.50s/it ███████████████████████████████████ 100% 8/8 00:19 00:00 Base

0: 640x576 1 face, 605.5ms
Speed: 23.3ms preprocess, 605.5ms inference, 111.2ms postprocess per image at shape (1, 3, 640, 576)
12:45:46-790011 INFO     Applying hypertile: unet=448
12:45:46-800417 DEBUG    Pipeline class change: original=StableDiffusionXLPipeline
                         target=StableDiffusionXLInpaintPipeline device=cpu fn=init
12:45:46-802418 TRACE    Run mask: fn=init
12:45:46-808987 TRACE    Mask args legacy: blur=4 padding=32
12:45:46-811987 TRACE    Mask shape=(1024, 896) opts=namespace(model=None, auto_mask='None', mask_only=False,
                         mask_blur=0.018, mask_erode=0.01, mask_dilate=0.14285714285714285, seg_iou_thresh=0.5,
                         seg_score_thresh=0.5, seg_nms_thresh=0.5, seg_overlap_ratio=0.3, seg_points_per_batch=64,
                         seg_topK=50, seg_colormap='pink', preview_type='Composite', seg_live=True, weight_original=0.5,
                         weight_mask=0.5, kernel_iterations=1, invert=False)
12:45:46-814989 TRACE    Mask erode=0.010 kernel=(3, 3) mask=(1024, 896)
12:45:46-817180 TRACE    Mask dilate=0.143 kernel=(33, 33) mask=(1024, 896)
12:45:46-830914 TRACE    Mask blur=0.018 x=5 y=5 mask=(1024, 896)
12:45:46-831914 DEBUG    Mask: size=896x1024 masked=33892px area=0.04 auto=None blur=0.018 erode=0.01
                         dilate=0.14285714285714285 type=Grayscale time=0.03
12:45:46-841415 TRACE    Mask crop: mask=(896, 1024) region=(322, 25, 558, 288) pad=32
12:45:46-842415 TRACE    Mask expand: image=(896, 1024) processing=(896, 1024) region=(322, 22, 558, 291)
12:45:46-921409 INFO     Saving: image="D:\Stable Diffusion Files\Outputs\init-images\08707-5352462d-init-image.png"
                         type=PNG resolution=896x1024 size=0
12:45:47-680377 ERROR    Running script postprocess image: extensions\adetailer\scripts\!adetailer.py: TypeError
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ D:\automatic\modules\scripts.py:580 in postprocess_image                                                             │
│                                                                                                                      │
│   579 │   │   │   │   args = p.per_script_args.get(script.title(), p.script_args[script.args_from:script.args_to])   │
│ ❱ 580 │   │   │   │   script.postprocess_image(p, pp, *args)                                                         │
│   581 │   │   │   except Exception as e:                                                                             │
│                                                                                                                      │
│ D:\automatic\extensions\adetailer\adetailer\traceback.py:159 in wrapper                                              │
│                                                                                                                      │
│   158 │   │   │   │   error = RuntimeError(output)                                                                   │
│ ❱ 159 │   │   │   raise error from None                                                                              │
│   160                                                                                                                │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError:
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                   System info                                                    │
│ ┏━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ │
│ ┃             ┃ Value                                                                                          ┃ │
│ ┡━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │
│ │    Platform │ Windows-10-10.0.22631-SP0                                                                      │ │
│ │      Python │ 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]               │ │
│ │     Version │ Unknown (too old or vladmandic)                                                                │ │
│ │      Commit │ Unknown                                                                                        │ │
│ │ Commandline │ ['launch.py', '--debug', '--upgrade', '--share']                                               │ │
│ │   Libraries │ {'torch': '2.2.0+cu121', 'torchvision': '0.17.0+cu121', 'ultralytics': '8.1.29', 'mediapipe':  │ │
│ │             │ '0.10.9'}                                                                                      │ │
│ └─────────────┴────────────────────────────────────────────────────────────────────────────────────────────────┘ │
│                                                      Inputs                                                      │
│ ┏━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ │
│ ┃                 ┃ Value                                                                                      ┃ │
│ ┡━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │
│ │          prompt │ (full body photograph), (highly detailed textures:1.4), young cla1re child wearing         │ │
│ │                 │ (sparkly skirt) and (pink shirt) and (sparkly black leggings) and (sparkly sneakers), lost │ │
│ │                 │ in an unfamiliar landscape, (wedge heel sneakers), (scared expression:1.2), magical        │ │
│ │                 │ elements, druid, celtic, irish, four leaf clover <lora:cla1re3 (20):1.0>                   │ │
│ │ negative_prompt │                                                                                            │ │
│ │          n_iter │ 1                                                                                          │ │
│ │      batch_size │ 1                                                                                          │ │
│ │           width │ 896                                                                                        │ │
│ │          height │ 1024                                                                                       │ │
│ │    sampler_name │ DDPM                                                                                       │ │
│ │       enable_hr │ False                                                                                      │ │
│ │     hr_upscaler │ None                                                                                       │ │
│ │      checkpoint │ lightningFusionXL_v14                                                                      │ │
│ │             vae │ None                                                                                       │ │
│ │            unet │ ------                                                                                     │ │
│ └─────────────────┴────────────────────────────────────────────────────────────────────────────────────────────┘ │
│                                           ADetailer                                                              │
│ ┏━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓                    │
│ ┃                     ┃ Value                                                               ┃                    │
│ ┡━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩                    │
│ │             version │ 24.3.1                                                              │                    │
│ │            ad_model │ face_yolov8m.pt                                                     │                    │
│ │           ad_prompt │ cla1re, highly detailed, scared expression, <lora:cla1re3 (20):1.0> │                    │
│ │  ad_negative_prompt │                                                                     │                    │
│ │ ad_controlnet_model │ None                                                                │                    │
│ │              is_api │ False                                                               │                    │
│ └─────────────────────┴─────────────────────────────────────────────────────────────────────┘                    │
│ ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮             │
│ │ D:\automatic\extensions\adetailer\adetailer\traceback.py:139 in wrapper                          │             │
│ │                                                                                                  │             │
│ │   138 │   │   try:                                                                               │             │
│ │ ❱ 139 │   │   │   return func(*args, **kwargs)                                                   │             │
│ │   140 │   │   except Exception as e:                                                             │             │
│ │                                                                                                  │             │
│ │ D:\automatic\extensions\adetailer\scripts\!adetailer.py:788 in postprocess_image                 │             │
│ │                                                                                                  │             │
│ │    787 │   │   │   │   │   continue                                                              │             │
│ │ ❱  788 │   │   │   │   is_processed |= self._postprocess_image_inner(p, pp, args, n=n)           │             │
│ │    789                                                                                           │             │
│ │                                                                                                  │             │
│ │ D:\automatic\extensions\adetailer\scripts\!adetailer.py:749 in _postprocess_image_inner          │             │
│ │                                                                                                  │             │
│ │    748 │   │   │   try:                                                                          │             │
│ │ ❱  749 │   │   │   │   processed = process_images(p2)                                            │             │
│ │    750 │   │   │   except NansException as e:                                                    │             │
│ │                                                                                                  │             │
│ │ D:\automatic\modules\processing.py:193 in process_images                                         │             │
│ │                                                                                                  │             │
│ │   192 │   │   │   with context_hypertile_vae(p), context_hypertile_unet(p):                      │             │
│ │ ❱ 193 │   │   │   │   processed = process_images_inner(p)                                        │             │
│ │   194                                                                                            │             │
│ │                                                                                                  │             │
│ │ D:\automatic\modules\processing.py:264 in process_images_inner                                   │             │
│ │                                                                                                  │             │
│ │   263 │   │   │   with devices.autocast():                                                       │             │
│ │ ❱ 264 │   │   │   │   p.init(p.all_prompts, p.all_seeds, p.all_subseeds)                         │             │
│ │   265 │   │   extra_network_data = None                                                          │             │
│ │                                                                                                  │             │
│ │ D:\automatic\modules\processing_class.py:423 in init                                             │             │
│ │                                                                                                  │             │
│ │   422 │   │   │   │   if image.width != self.width or image.height != self.height:               │             │
│ │ ❱ 423 │   │   │   │   │   image = images.resize_image(3, image, self.width, self.height, self.   │             │
│ │   424 │   │   │   if self.image_mask is not None and self.inpainting_fill != 1:                  │             │
│ │                                                                                                  │             │
│ │ D:\automatic\modules\images.py:292 in resize_image                                               │             │
│ │                                                                                                  │             │
│ │   291 │   elif resize_mode == 3: # fill                                                          │             │
│ │ ❱ 292 │   │   res = fill(im)                                                                     │             │
│ │   293 │   elif resize_mode == 4: # edge                                                          │             │
│ │                                                                                                  │             │
│ │ D:\automatic\modules\images.py:280 in fill                                                       │             │
│ │                                                                                                  │             │
│ │   279 │   │   ratio = min(width / im.width, height / im.height)                                  │             │
│ │ ❱ 280 │   │   im = resize(im, im.width * ratio, im.height * ratio)                               │             │
│ │   281 │   │   res = Image.new(im.mode, (width, height), color=color)                             │             │
│ │                                                                                                  │             │
│ │ D:\automatic\modules\images.py:229 in resize                                                     │             │
│ │                                                                                                  │             │
│ │   228 │   │   if upscaler_name is None or upscaler_name == "None" or im.mode == 'L':             │             │
│ │ ❱ 229 │   │   │   return im.resize((w, h), resample=Image.Resampling.LANCZOS) # force for mask   │             │
│ │   230 │   │   scale = max(w / im.width, h / im.height)                                           │             │
│ │                                                                                                  │             │
│ │ D:\automatic\venv\lib\site-packages\PIL\Image.py:2200 in resize                                  │             │
│ │                                                                                                  │             │
│ │   2199 │   │                                                                                     │             │
│ │ ❱ 2200 │   │   return self._new(self.im.resize(size, resample, box))                             │             │
│ │   2201                                                                                           │             │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯             │
│ TypeError: 'float' object cannot be interpreted as an integer                                                    │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

12:45:47-775787 INFO     Saving: image="D:\Stable Diffusion Files\Outputs\text\08180-lightningFusionXL_v14-full body
                         photograph highly detailed textures 1 4.png" type=PNG resolution=896x1024 size=0
12:45:48-099781 INFO     Processed: images=1 time=40.80 its=0.20 memory={'ram': {'used': 14.49, 'total': 31.9}, 'gpu':
                         {'used': 1.61, 'total': 8.0}, 'retries': 0, 'oom': 0}
12:46:00-490042 DEBUG    Server: alive=True jobs=1 requests=50 uptime=180 memory=14.46/31.9 backend=Backend.DIFFUSERS
                         state=job="" 0/1
12:46:23-159796 INFO     MOTD: N/A
12:46:37-423292 DEBUG    Themes: builtin=12 gradio=5 huggingface=55
12:47:13-805900 INFO     Browser session: user=None client=127.0.0.1 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64;
                         rv:123.0) Gecko/20100101 Firefox/123.0
12:47:58-261124 DEBUG    Paste prompt: type="current" prompt="(full body photograph), (highly detailed textures:1.4),
                         young cla1re child wearing (sparkly skirt) and (pink shirt) and (sparkly black leggings) and
                         (sparkly sneakers), lost in an unfamiliar landscape, (wedge heel sneakers), (scared
                         expression:1.2), magical elements, druid, celtic, irish, four leaf clover <lora:cla1re3
                         (20):1.0>
                         Steps: 8, Seed: 1621613978, Sampler: DDPM, CFG scale: 1, Size: 896x1024, Parser: Full parser,
                         Model: lightningFusionXL_v14, Model hash: fe5ad21d7f, Clip skip: 2, Backend: Diffusers, App:
                         SD.Next, Version: 7ad038d, Operations: txt2img, Hypertile UNet: 448, ADetailer model:
                         face_yolov8m.pt, ADetailer prompt: "cla1re, highly detailed, scared expression, <lora:cla1re3
                         (20):1.0>", ADetailer confidence: 0.75, ADetailer mask only top k largest: 1, ADetailer dilate
                         erode: 4, ADetailer mask blur: 4, ADetailer denoising strength: 0.4, ADetailer inpaint only
                         masked: True, ADetailer inpaint padding: 32, ADetailer model 2nd:
                         mediapipe_face_mesh_eyes_only, ADetailer prompt 2nd: "purple magic eyes, EyeDetail-SDXL,
                         <lora:Stunning_eyes_2:1.0>", ADetailer confidence 2nd: 0.75, ADetailer mask only top k largest
                         2nd: 1, ADetailer dilate erode 2nd: 4, ADetailer mask blur 2nd: 4, ADetailer denoising strength
                         2nd: 0.4, ADetailer inpaint only masked 2nd: True, ADetailer inpaint padding 2nd: 32, ADetailer
                         model 3rd: hand_yolov8n.pt, ADetailer prompt 3rd: young girl hand, ADetailer negative prompt
                         3rd: bad-hands-SDXL, ADetailer confidence 3rd: 0.5, ADetailer dilate erode 3rd: 4, ADetailer
                         mask blur 3rd: 4, ADetailer denoising strength 3rd: 0.4, ADetailer inpaint only masked 3rd:
                         True, ADetailer inpaint padding 3rd: 32, ADetailer model 4th: hand_yolov8n.pt, ADetailer prompt
                         4th: young girl hand, ADetailer negative prompt 4th: bad-hands-SDXL, ADetailer confidence 4th:
                         0.5, ADetailer dilate erode 4th: 4, ADetailer mask blur 4th: 4, ADetailer denoising strength
                         4th: 0.3, ADetailer inpaint only masked 4th: True, ADetailer inpaint padding 4th: 32, ADetailer
                         version: 24.3.1, Lora hashes: "cla1re3 (20): be178959", Sampler options: rescale beta,
                         Pipeline: StableDiffusionXLPipeline"
12:47:58-270385 DEBUG    Settings overrides: []
12:48:00-400909 DEBUG    Server: alive=True jobs=1 requests=399 uptime=300 memory=14.47/31.9 backend=Backend.DIFFUSERS
                         state=job="" 0/1
12:48:32-266248 INFO     Applying hypertile: unet=448
12:48:32-273427 DEBUG    Pipeline class change: original=StableDiffusionXLInpaintPipeline
                         target=StableDiffusionXLPipeline device=cpu fn=init
12:48:32-281405 INFO     LoRA apply: ['cla1re3 (20)'] patch=0.00 load=0.00
12:48:32-284909 INFO     Base: class=StableDiffusionXLPipeline
12:48:33-208990 DEBUG    Diffuser pipeline: StableDiffusionXLPipeline task=DiffusersTaskType.TEXT_2_IMAGE
                         set={'prompt_embeds': torch.Size([1, 77, 2048]), 'pooled_prompt_embeds': torch.Size([1, 1280]),
                         'negative_prompt_embeds': torch.Size([1, 77, 2048]), 'negative_pooled_prompt_embeds':
                         torch.Size([1, 1280]), 'guidance_scale': 1, 'generator': device(type='cuda'),
                         'num_inference_steps': 8, 'eta': 1.0, 'guidance_rescale': 0.7, 'denoising_end': None,
                         'output_type': 'latent', 'width': 896, 'height': 1024, 'parser': 'Full parser'}
12:48:33-218207 DEBUG    Sampler: sampler="DDPM" config={'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end':
                         0.012, 'beta_schedule': 'scaled_linear', 'prediction_type': 'epsilon', 'variance_type':
                         'fixed_small', 'clip_sample': False, 'thresholding': False, 'clip_sample_range': 1.0,
                         'sample_max_value': 1.0, 'timestep_spacing': 'leading', 'rescale_betas_zero_snr': True}
Progress  1.95it/s ███████████████████████████████████ 100% 8/8 00:04 00:00 Base
12:48:37-533948 INFO     High memory utilization: GPU=84% RAM=50% {'ram': {'used': 15.87, 'total': 31.9}, 'gpu':
                         {'used': 6.68, 'total': 8.0}, 'retries': 0, 'oom': 0}
12:48:37-818391 DEBUG    GC: collected=12745 device=cuda {'ram': {'used': 15.87, 'total': 31.9}, 'gpu': {'used': 6.68,
                         'total': 8.0}, 'retries': 0, 'oom': 0} time=0.29

0: 640x576 1 face, 462.4ms
Speed: 2.0ms preprocess, 462.4ms inference, 1.2ms postprocess per image at shape (1, 3, 640, 576)
12:48:47-837609 INFO     Applying hypertile: unet=448
12:48:47-843610 DEBUG    Pipeline class change: original=StableDiffusionXLPipeline
                         target=StableDiffusionXLInpaintPipeline device=cpu fn=init
12:48:47-846115 TRACE    Run mask: fn=init
12:48:47-846618 TRACE    Mask args legacy: blur=4 padding=32
12:48:47-848380 TRACE    Mask shape=(1024, 896) opts=namespace(model=None, auto_mask='None', mask_only=False,
                         mask_blur=0.018, mask_erode=0.01, mask_dilate=0.14285714285714285, seg_iou_thresh=0.5,
                         seg_score_thresh=0.5, seg_nms_thresh=0.5, seg_overlap_ratio=0.3, seg_points_per_batch=64,
                         seg_topK=50, seg_colormap='pink', preview_type='Composite', seg_live=True, weight_original=0.5,
                         weight_mask=0.5, kernel_iterations=1, invert=False)
12:48:47-850883 TRACE    Mask erode=0.010 kernel=(3, 3) mask=(1024, 896)
12:48:47-851884 TRACE    Mask dilate=0.143 kernel=(33, 33) mask=(1024, 896)
12:48:47-856938 TRACE    Mask blur=0.018 x=5 y=5 mask=(1024, 896)
12:48:47-857939 DEBUG    Mask: size=896x1024 masked=34839px area=0.04 auto=None blur=0.018 erode=0.01
                         dilate=0.14285714285714285 type=Grayscale time=0.01
12:48:47-864940 TRACE    Mask crop: mask=(896, 1024) region=(382, 0, 621, 260) pad=32
12:48:47-865917 TRACE    Mask expand: image=(896, 1024) processing=(896, 1024) region=(382, 0, 621, 273)
12:48:47-934947 INFO     Saving: image="D:\Stable Diffusion Files\Outputs\init-images\08708-e4b2eebc-init-image.png"
                         type=PNG resolution=896x1024 size=0
12:48:48-585782 ERROR    Running script postprocess image: extensions\adetailer\scripts\!adetailer.py: TypeError
╭───────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────╮
│ D:\automatic\modules\scripts.py:580 in postprocess_image                                                             │
│                                                                                                                      │
│   579 │   │   │   │   args = p.per_script_args.get(script.title(), p.script_args[script.args_from:script.args_to])   │
│ ❱ 580 │   │   │   │   script.postprocess_image(p, pp, *args)                                                         │
│   581 │   │   │   except Exception as e:                                                                             │
│                                                                                                                      │
│ D:\automatic\extensions\adetailer\adetailer\traceback.py:159 in wrapper                                              │
│                                                                                                                      │
│   158 │   │   │   │   error = RuntimeError(output)                                                                   │
│ ❱ 159 │   │   │   raise error from None                                                                              │
│   160                                                                                                                │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError:
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                   System info                                                    │
│ ┏━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ │
│ ┃             ┃ Value                                                                                          ┃ │
│ ┡━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │
│ │    Platform │ Windows-10-10.0.22631-SP0                                                                      │ │
│ │      Python │ 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]               │ │
│ │     Version │ Unknown (too old or vladmandic)                                                                │ │
│ │      Commit │ Unknown                                                                                        │ │
│ │ Commandline │ ['launch.py', '--debug', '--upgrade', '--share']                                               │ │
│ │   Libraries │ {'torch': '2.2.0+cu121', 'torchvision': '0.17.0+cu121', 'ultralytics': '8.1.29', 'mediapipe':  │ │
│ │             │ '0.10.9'}                                                                                      │ │
│ └─────────────┴────────────────────────────────────────────────────────────────────────────────────────────────┘ │
│                                                      Inputs                                                      │
│ ┏━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ │
│ ┃                 ┃ Value                                                                                      ┃ │
│ ┡━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │
│ │          prompt │ (full body photograph), (highly detailed textures:1.4), young cla1re child wearing         │ │
│ │                 │ (sparkly skirt) and (pink shirt) and (sparkly black leggings) and (sparkly sneakers), lost │ │
│ │                 │ in an unfamiliar landscape, (wedge heel sneakers), (scared expression:1.2), magical        │ │
│ │                 │ elements, druid, celtic, irish, four leaf clover <lora:cla1re3 (20):1.0>                   │ │
│ │ negative_prompt │                                                                                            │ │
│ │          n_iter │ 1                                                                                          │ │
│ │      batch_size │ 1                                                                                          │ │
│ │           width │ 896                                                                                        │ │
│ │          height │ 1024                                                                                       │ │
│ │    sampler_name │ DDPM                                                                                       │ │
│ │       enable_hr │ False                                                                                      │ │
│ │     hr_upscaler │ None                                                                                       │ │
│ │      checkpoint │ lightningFusionXL_v14                                                                      │ │
│ │             vae │ None                                                                                       │ │
│ │            unet │ ------                                                                                     │ │
│ └─────────────────┴────────────────────────────────────────────────────────────────────────────────────────────┘ │
│                                           ADetailer                                                              │
│ ┏━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓                    │
│ ┃                     ┃ Value                                                               ┃                    │
│ ┡━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩                    │
│ │             version │ 24.3.1                                                              │                    │
│ │            ad_model │ face_yolov8m.pt                                                     │                    │
│ │           ad_prompt │ cla1re, highly detailed, scared expression, <lora:cla1re3 (20):1.0> │                    │
│ │  ad_negative_prompt │                                                                     │                    │
│ │ ad_controlnet_model │ None                                                                │                    │
│ │              is_api │ False                                                               │                    │
│ └─────────────────────┴─────────────────────────────────────────────────────────────────────┘                    │
│ ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮             │
│ │ D:\automatic\extensions\adetailer\adetailer\traceback.py:139 in wrapper                          │             │
│ │                                                                                                  │             │
│ │   138 │   │   try:                                                                               │             │
│ │ ❱ 139 │   │   │   return func(*args, **kwargs)                                                   │             │
│ │   140 │   │   except Exception as e:                                                             │             │
│ │                                                                                                  │             │
│ │ D:\automatic\extensions\adetailer\scripts\!adetailer.py:788 in postprocess_image                 │             │
│ │                                                                                                  │             │
│ │    787 │   │   │   │   │   continue                                                              │             │
│ │ ❱  788 │   │   │   │   is_processed |= self._postprocess_image_inner(p, pp, args, n=n)           │             │
│ │    789                                                                                           │             │
│ │                                                                                                  │             │
│ │ D:\automatic\extensions\adetailer\scripts\!adetailer.py:749 in _postprocess_image_inner          │             │
│ │                                                                                                  │             │
│ │    748 │   │   │   try:                                                                          │             │
│ │ ❱  749 │   │   │   │   processed = process_images(p2)                                            │             │
│ │    750 │   │   │   except NansException as e:                                                    │             │
│ │                                                                                                  │             │
│ │ D:\automatic\modules\processing.py:193 in process_images                                         │             │
│ │                                                                                                  │             │
│ │   192 │   │   │   with context_hypertile_vae(p), context_hypertile_unet(p):                      │             │
│ │ ❱ 193 │   │   │   │   processed = process_images_inner(p)                                        │             │
│ │   194                                                                                            │             │
│ │                                                                                                  │             │
│ │ D:\automatic\modules\processing.py:264 in process_images_inner                                   │             │
│ │                                                                                                  │             │
│ │   263 │   │   │   with devices.autocast():                                                       │             │
│ │ ❱ 264 │   │   │   │   p.init(p.all_prompts, p.all_seeds, p.all_subseeds)                         │             │
│ │   265 │   │   extra_network_data = None                                                          │             │
│ │                                                                                                  │             │
│ │ D:\automatic\modules\processing_class.py:423 in init                                             │             │
│ │                                                                                                  │             │
│ │   422 │   │   │   │   if image.width != self.width or image.height != self.height:               │             │
│ │ ❱ 423 │   │   │   │   │   image = images.resize_image(3, image, self.width, self.height, self.   │             │
│ │   424 │   │   │   if self.image_mask is not None and self.inpainting_fill != 1:                  │             │
│ │                                                                                                  │             │
│ │ D:\automatic\modules\images.py:292 in resize_image                                               │             │
│ │                                                                                                  │             │
│ │   291 │   elif resize_mode == 3: # fill                                                          │             │
│ │ ❱ 292 │   │   res = fill(im)                                                                     │             │
│ │   293 │   elif resize_mode == 4: # edge                                                          │             │
│ │                                                                                                  │             │
│ │ D:\automatic\modules\images.py:280 in fill                                                       │             │
│ │                                                                                                  │             │
│ │   279 │   │   ratio = min(width / im.width, height / im.height)                                  │             │
│ │ ❱ 280 │   │   im = resize(im, im.width * ratio, im.height * ratio)                               │             │
│ │   281 │   │   res = Image.new(im.mode, (width, height), color=color)                             │             │
│ │                                                                                                  │             │
│ │ D:\automatic\modules\images.py:229 in resize                                                     │             │
│ │                                                                                                  │             │
│ │   228 │   │   if upscaler_name is None or upscaler_name == "None" or im.mode == 'L':             │             │
│ │ ❱ 229 │   │   │   return im.resize((w, h), resample=Image.Resampling.LANCZOS) # force for mask   │             │
│ │   230 │   │   scale = max(w / im.width, h / im.height)                                           │             │
│ │                                                                                                  │             │
│ │ D:\automatic\venv\lib\site-packages\PIL\Image.py:2200 in resize                                  │             │
│ │                                                                                                  │             │
│ │   2199 │   │                                                                                     │             │
│ │ ❱ 2200 │   │   return self._new(self.im.resize(size, resample, box))                             │             │
│ │   2201                                                                                           │             │
│ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯             │
│ TypeError: 'float' object cannot be interpreted as an integer                                                    │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

12:48:48-652889 INFO     Saving: image="D:\Stable Diffusion Files\Outputs\text\08181-lightningFusionXL_v14-full body
                         photograph highly detailed textures 1 4.png" type=PNG resolution=896x1024 size=0
12:48:48-967852 INFO     High memory utilization: GPU=22% RAM=63% {'ram': {'used': 20.14, 'total': 31.9}, 'gpu':
                         {'used': 1.72, 'total': 8.0}, 'retries': 0, 'oom': 0}
12:48:49-225266 DEBUG    GC: collected=2586 device=cuda {'ram': {'used': 20.13, 'total': 31.9}, 'gpu': {'used': 1.5,
                         'total': 8.0}, 'retries': 0, 'oom': 0} time=0.26
12:48:49-226271 INFO     Processed: images=1 time=16.95 its=0.47 memory={'ram': {'used': 20.13, 'total': 31.9}, 'gpu':
                         {'used': 1.5, 'total': 8.0}, 'retries': 0, 'oom': 0}
12:48:49-749818 INFO     High memory utilization: GPU=19% RAM=63% {'ram': {'used': 20.11, 'total': 31.9}, 'gpu':
                         {'used': 1.5, 'total': 8.0}, 'retries': 0, 'oom': 0}
12:48:50-003679 DEBUG    GC: collected=0 device=cuda {'ram': {'used': 20.11, 'total': 31.9}, 'gpu': {'used': 1.5,
                         'total': 8.0}, 'retries': 0, 'oom': 0} time=0.25
12:49:59-937024 DEBUG    Server: alive=True jobs=1 requests=444 uptime=420 memory=20.11/31.9 backend=Backend.DIFFUSERS
                         state=job="" 0/2
12:52:00-254251 DEBUG    Server: alive=True jobs=1 requests=456 uptime=540 memory=20.11/31.9 backend=Backend.DIFFUSERS
                         state=job="" 0/2

Version Platform Description

Python 3.10.6 Dev branch d0e35a7a Windows 11 Professional Nvidia RTX 3070 8GB Mozilla Firefox v123.0.1

URL link of the extension

https://github.com/Bing-su/adetailer

URL link of the issue reported in the extension repository

No response

Acknowledgements

MysticDaedra commented 6 months ago

Update: I think this might actually not be an extension error at all, but rather a bug in inpainting itself. Trying to inpaint with adetailer disabled returned the following:

13:26:27-956949 INFO     Applying hypertile: unet=512
13:26:27-972644 TRACE    Run mask: fn=init
13:26:27-986379 TRACE    Mask args legacy: blur=4 padding=32
13:26:27-994252 TRACE    Mask shape=(4096, 3584) opts=namespace(model=None, auto_mask='None', mask_only=False, mask_blur=0.004, mask_erode=0.01, mask_dilate=0.03571428571428571, seg_iou_thresh=0.5,
                         seg_score_thresh=0.5, seg_nms_thresh=0.5, seg_overlap_ratio=0.3, seg_points_per_batch=64, seg_topK=50, seg_colormap='pink', preview_type='Composite', seg_live=True,
                         weight_original=0.5, weight_mask=0.5, kernel_iterations=1, invert=False)
13:26:28-001889 TRACE    Mask erode=0.010 kernel=(9, 9) mask=(4096, 3584)
13:26:28-008891 TRACE    Mask dilate=0.036 kernel=(33, 33) mask=(4096, 3584)
13:26:28-041902 TRACE    Mask blur=0.004 x=4 y=4 mask=(4096, 3584)
13:26:28-044904 DEBUG    Mask: size=3584x4096 masked=209003px area=0.01 auto=None blur=0.004 erode=0.01 dilate=0.03571428571428571 type=Grayscale time=0.07
13:26:28-102549 TRACE    Mask crop: mask=(3584, 4096) region=(1247, 326, 1838, 908) pad=32
13:26:28-104551 TRACE    Mask expand: image=(3584, 4096) processing=(1024, 1024) region=(1247, 322, 1838, 913)
13:26:28-221868 INFO     Saving: image="D:\Stable Diffusion Files\Outputs\init-images\08764-2bd7c7f3-init-image.png" type=PNG resolution=3584x4096 size=0
13:26:33-182553 ERROR    Exception: 'float' object cannot be interpreted as an integer
13:26:33-183553 ERROR    Arguments: args=('task(62hq0p859fnlnjd)', 2.0, 'young cla1re, highly detailed, (scared expression:1.2), fFaceDetail-SDXL EyeDetail-SDXL <lora:cla1re3 (20):1.0>', '', [],
                         <PIL.Image.Image image mode=RGBA size=1792x2048 at 0x22EC66F9A50>, None, {'image': <PIL.Image.Image image mode=RGBA size=3584x4096 at 0x22EC66FBDF0>, 'mask': <PIL.Image.Image image
                         mode=RGB size=3584x4096 at 0x22EC66FB070>}, None, None, None, None, 10, 3, 4, 1, 1, True, False, False, 1, 1, 1.2, 6, 0.7, 0, 1, 0, 1, 0.4, -1.0, -1.0, 0, 0, 0, 0, 1024, 1024, 1, 1,
                         'None', 1, 32, 0, None, '', '', '', 0, 0, 0, 0, False, 4, 0.95, False, 0.6, 1, '#000000', 0, [], 0, 1, 'None', 'None', 'None', 'None', 0.5, 0.5, 0.5, 0.5, None, None, None, None, 0, 0,
                         0, 0, 1, 1, 1, 1, 'None', 16, 'None', 1, True, 'None', 2, True, 1, 0, True, 'none', 3, 4, 0.25, 0.25, False, False, {'ad_model': 'face_yolov8m.pt', 'ad_model_classes': '', 'ad_prompt':
                         'cla1re, highly detailed, (scared expression:1.2), fFaceDetail-SDXL EyeDetail-SDXL <lora:cla1re3 (20):1.0>', 'ad_negative_prompt': '', 'ad_confidence': 0.75, 'ad_mask_k_largest': 0,
                         'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength':
                         0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps':
                         False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE',
                         'ad_use_sampler': False, 'ad_sampler': 'Default', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False,
                         'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()},
                         {'ad_model': 'mediapipe_face_mesh_eyes_only', 'ad_model_classes': '', 'ad_prompt': 'purple magic eyes, EyeDetail-SDXL <lora:Stunning_eyes_2:1.0>', 'ad_negative_prompt': '',
                         'ad_confidence': 0.75, 'ad_mask_k_largest': 1, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None',
                         'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512,
                         'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint',
                         'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'Default', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False,
                         'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0,
                         'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'hand_yolov8s.pt', 'ad_model_classes': '', 'ad_prompt': 'young girl hand', 'ad_negative_prompt': 'bad-hands-SDXL',
                         'ad_confidence': 0.6, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None',
                         'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512,
                         'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint',
                         'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'Default', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False,
                         'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0,
                         'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'hand_yolov8s.pt', 'ad_model_classes': '', 'ad_prompt': 'young girl hand', 'ad_negative_prompt': 'bad-hands-SDXL',
                         'ad_confidence': 0.6, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None',
                         'ad_mask_blur': 4, 'ad_denoising_strength': 0.3, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512,
                         'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint',
                         'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'Default', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False,
                         'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0,
                         'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0,
                         'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength':
                         0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps':
                         False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE',
                         'ad_use_sampler': False, 'ad_sampler': 'Default', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False,
                         'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()},
                         {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1,
                         'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True,
                         'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28,
                         'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler':
                         False, 'ad_sampler': 'Default', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False,
                         'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()},
                         {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1,
                         'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True,
                         'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28,
                         'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler':
                         False, 'ad_sampler': 'Default', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False,
                         'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()},
                         {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1,
                         'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True,
                         'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28,
                         'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler':
                         False, 'ad_sampler': 'Default', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False,
                         'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, '', '',
                         0.5, True, 1, False, 'None', None, 'None', 16, 'None', 2, True, 1, 0, 'none', 3, 4, 0.25, 0.25, 0.5, 0.5, 0.1, 1, True, '', 0.5, 0.9, '', 0.5, 0.9, 4, 0.5, 'Linear', 'None',
                         '<span>&nbsp Outpainting</span><br>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False,
                         '', '<span>&nbsp SD Upscale</span><br>', 128, 31, 2, 'SVD 1.0', 14, True, 1, 3, 6, 0.5, 0.1, 'None', 2, True, 1, 0, 0, '', [], 0, '', [], 0, '', [], False, True, False, False, False,
                         False, 0, '<p style="margin-bottom:0.75em">Will upscale the image depending on the selected target size type</p>', 512, 0, 8, 32, 64, 0.35, 32, 0, True, 0, False, 8, 0, 0, 2048, 2048,
                         2, 'None', [], 'FaceID Base', True, True, 1, 1, 1, 0.5, True, 'person', 1, 0.5, True) kwargs={}
13:26:33-206410 ERROR    gradio call: TypeError
╭────────────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────────────────────────────────────────────────╮
│ D:\automatic\modules\call_queue.py:31 in f                                                                                                                                                                    │
│                                                                                                                                                                                                               │
│   30 │   │   │   try:                                                                                                                                                                                         │
│ ❱ 31 │   │   │   │   res = func(*args, **kwargs)                                                                                                                                                              │
│   32 │   │   │   │   progress.record_results(id_task, res)                                                                                                                                                    │
│                                                                                                                                                                                                               │
│ D:\automatic\modules\img2img.py:264 in img2img                                                                                                                                                                │
│                                                                                                                                                                                                               │
│   263 │   │   if processed is None:                                                                                                                                                                           │
│ ❱ 264 │   │   │   processed = processing.process_images(p)                                                                                                                                                    │
│   265 │   p.close()                                                                                                                                                                                           │
│                                                                                                                                                                                                               │
│ D:\automatic\modules\processing.py:193 in process_images                                                                                                                                                      │
│                                                                                                                                                                                                               │
│   192 │   │   │   with context_hypertile_vae(p), context_hypertile_unet(p):                                                                                                                                   │
│ ❱ 193 │   │   │   │   processed = process_images_inner(p)                                                                                                                                                     │
│   194                                                                                                                                                                                                         │
│                                                                                                                                                                                                               │
│ D:\automatic\modules\processing.py:264 in process_images_inner                                                                                                                                                │
│                                                                                                                                                                                                               │
│   263 │   │   │   with devices.autocast():                                                                                                                                                                    │
│ ❱ 264 │   │   │   │   p.init(p.all_prompts, p.all_seeds, p.all_subseeds)                                                                                                                                      │
│   265 │   │   extra_network_data = None                                                                                                                                                                       │
│                                                                                                                                                                                                               │
│ D:\automatic\modules\processing_class.py:423 in init                                                                                                                                                          │
│                                                                                                                                                                                                               │
│   422 │   │   │   │   if image.width != self.width or image.height != self.height:                                                                                                                            │
│ ❱ 423 │   │   │   │   │   image = images.resize_image(3, image, self.width, self.height, self.resize_name)                                                                                                    │
│   424 │   │   │   if self.image_mask is not None and self.inpainting_fill != 1:                                                                                                                               │
│                                                                                                                                                                                                               │
│ D:\automatic\modules\images.py:292 in resize_image                                                                                                                                                            │
│                                                                                                                                                                                                               │
│   291 │   elif resize_mode == 3: # fill                                                                                                                                                                       │
│ ❱ 292 │   │   res = fill(im)                                                                                                                                                                                  │
│   293 │   elif resize_mode == 4: # edge                                                                                                                                                                       │
│                                                                                                                                                                                                               │
│ D:\automatic\modules\images.py:280 in fill                                                                                                                                                                    │
│                                                                                                                                                                                                               │
│   279 │   │   ratio = min(width / im.width, height / im.height)                                                                                                                                               │
│ ❱ 280 │   │   im = resize(im, im.width * ratio, im.height * ratio)                                                                                                                                            │
│   281 │   │   res = Image.new(im.mode, (width, height), color=color)                                                                                                                                          │
│                                                                                                                                                                                                               │
│ D:\automatic\modules\images.py:229 in resize                                                                                                                                                                  │
│                                                                                                                                                                                                               │
│   228 │   │   if upscaler_name is None or upscaler_name == "None" or im.mode == 'L':                                                                                                                          │
│ ❱ 229 │   │   │   return im.resize((w, h), resample=Image.Resampling.LANCZOS) # force for mask                                                                                                                │
│   230 │   │   scale = max(w / im.width, h / im.height)                                                                                                                                                        │
│                                                                                                                                                                                                               │
│ D:\automatic\venv\lib\site-packages\PIL\Image.py:2200 in resize                                                                                                                                               │
│                                                                                                                                                                                                               │
│   2199 │   │                                                                                                                                                                                                  │
│ ❱ 2200 │   │   return self._new(self.im.resize(size, resample, box))                                                                                                                                          │
│   2201                                                                                                                                                                                                        │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError: 'float' object cannot be interpreted as an integer
13:28:00-145295 DEBUG    Server: alive=True jobs=1 requests=1346 uptime=2700 memory=8.71/31.9 backend=Backend.DIFFUSERS state=job="upscale batch 1/72" 0/72

As you can see, this is almost exactly the same error that was returned before when adetailer tried to run... only this time with plain inpainting and no extensions involved.
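For reference, the traceback points at the "fill" resize path in `modules/images.py`: `fill()` computes `ratio = min(width / im.width, height / im.height)` and then passes `im.width * ratio` and `im.height * ratio` (floats) down to PIL's `Image.resize`, which only accepts integer dimensions. A minimal sketch of that calculation with the obvious fix (rounding to int before resizing) is below; `fill_size` is a hypothetical helper, not a function in the repo, and the sample dimensions are taken from the log above (3584x4096 source, 1024x1024 target):

```python
def fill_size(src_w: int, src_h: int, dst_w: int, dst_h: int) -> tuple[int, int]:
    """Compute the intermediate size used by resize mode 3 ("fill").

    The failing code passed src_w * ratio (a float) straight to PIL's
    Image.resize, raising "TypeError: 'float' object cannot be
    interpreted as an integer". Rounding to int before the resize call
    avoids that.
    """
    ratio = min(dst_w / src_w, dst_h / src_h)
    return int(round(src_w * ratio)), int(round(src_h * ratio))

# With the dimensions from the log: a 3584x4096 image filled into
# 1024x1024 scales by 0.25, giving the 896x1024 seen in the saved output.
print(fill_size(3584, 4096, 1024, 1024))  # → (896, 1024)
```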

vladmandic commented 6 months ago

fixed. it was a simple float vs int thing.