vladmandic / automatic

SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
https://github.com/vladmandic/automatic
GNU Affero General Public License v3.0

[Extension]: ADetailer - None type error in Img2Img if batchcount > 1 #2944

Closed · cgidesign-de closed this 5 months ago

cgidesign-de commented 6 months ago

Issue Description

Steps to reproduce:

  1. Set SD.Next to the diffusers backend, VAE = automatic, Refiner = None, Pipeline = autodetect
  2. Create image in Text2Image
  3. Send created image to Image2Image
  4. Set Batch count in Image2Image to 2
  5. Enable ADetailer Extension in Image2Image
  6. Run Image2Image

Image creation starts, ADetailer finds the face and creates the new face details. Once this process is finished, the error occurs - see the attached log. If Batch count is set to 1 instead of 2, the error does not occur and the final image is created successfully.

In Text2Image both cases work without error - Batch count = 1 and Batch count = 2 both finish successfully.
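For reference, a rough API-based reproduction of the same steps. This is a minimal sketch only: the endpoint path, the `n_iter` field for batch count, and the `alwayson_scripts` payload shape for ADetailer are assumptions based on the A1111-style API that SD.Next exposes, not something confirmed in this report.

```python
# Hedged sketch of an API reproduction; field names are assumptions based on
# the A1111-compatible API, prompt/strength values are taken from the log below.
import base64
import requests

URL = "http://127.0.0.1:7860"  # local SD.Next instance from the log

with open("txt2img_result.png", "rb") as f:   # the image created in step 2
    init_image = base64.b64encode(f.read()).decode()

payload = {
    "prompt": "woman, sitting on chair, opulent, velvet, burgundy, 19th century",
    "negative_prompt": "porn, sex, nude, naked",
    "init_images": [init_image],
    "denoising_strength": 0.3,
    "n_iter": 2,        # batch count = 2 triggers the error; 1 completes fine
    "batch_size": 1,
    "alwayson_scripts": {
        # assumed ADetailer payload shape for face detection
        "ADetailer": {"args": [True, False, {"ad_model": "face_yolov8n.pt"}]}
    },
}

r = requests.post(f"{URL}/sdapi/v1/img2img", json=payload, timeout=600)
r.raise_for_status()
print(len(r.json().get("images", [])), "image(s) returned")
```

With `n_iter` set to 1 the same request should complete; with 2 it should hit the error shown in the log below.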

LOG:

```
2024-03-04 09:21:44,700 | sd | INFO | launch | Starting SD.Next
2024-03-04 09:21:44,703 | sd | INFO | installer | Logger: file="F:\automatic\sdnext.log" level=DEBUG size=65 mode=create
2024-03-04 09:21:44,704 | sd | INFO | installer | Python 3.10.11 on Windows
2024-03-04 09:21:44,766 | sd | INFO | installer | Version: app=sd.next updated=2024-02-24 hash=c1dfb1b2 url=https://github.com/vladmandic/automatic/tree/master
2024-03-04 09:21:45,080 | sd | INFO | installer | Latest published version: 912237ecf7d5b3616a272f983f3f59cc405f64c3 2024-03-01T14:24:24Z
2024-03-04 09:21:45,090 | sd | INFO | launch | Platform: arch=AMD64 cpu=Intel64 Family 6 Model 183 Stepping 1, GenuineIntel system=Windows release=Windows-10-10.0.22631-SP0 python=3.10.11
2024-03-04 09:21:45,094 | sd | DEBUG | installer | Setting environment tuning
2024-03-04 09:21:45,095 | sd | DEBUG | installer | HF cache folder: D:\huggingface\hub
2024-03-04 09:21:45,096 | sd | DEBUG | installer | Torch overrides: cuda=False rocm=False ipex=False diml=False openvino=False
2024-03-04 09:21:45,097 | sd | DEBUG | installer | Torch allowed: cuda=True rocm=True ipex=True diml=True openvino=True
2024-03-04 09:21:45,098 | sd | INFO | installer | nVidia CUDA toolkit detected: nvidia-smi present
2024-03-04 09:21:45,099 | sd | DEBUG | installer | Installing torch: torch torchvision --index-url https://download.pytorch.org/whl/cu121
2024-03-04 09:21:45,141 | sd | DEBUG | installer | Repository update time: Sat Feb 24 14:23:08 2024
2024-03-04 09:21:45,142 | sd | INFO | launch | Startup: standard
2024-03-04 09:21:45,143 | sd | INFO | installer | Verifying requirements
2024-03-04 09:21:45,152 | sd | INFO | installer | Verifying packages
2024-03-04 09:21:45,154 | sd | INFO | installer | Verifying submodules
2024-03-04 09:21:47,497 | sd | DEBUG | installer | Submodule: extensions-builtin/sd-extension-chainner / main
2024-03-04 09:21:47,532 | sd | DEBUG | installer | Submodule: extensions-builtin/sd-extension-system-info / main
2024-03-04 09:21:47,564 | sd | DEBUG | installer | Submodule: extensions-builtin/sd-webui-agent-scheduler / main
2024-03-04 09:21:47,597 | sd | DEBUG | installer | Submodule: extensions-builtin/sd-webui-controlnet / main
2024-03-04 09:21:47,645 | sd | DEBUG | installer | Submodule: extensions-builtin/stable-diffusion-webui-images-browser / main
2024-03-04 09:21:47,676 | sd | DEBUG | installer | Submodule: extensions-builtin/stable-diffusion-webui-rembg / master
2024-03-04 09:21:47,710 | sd | DEBUG | installer | Submodule: modules/k-diffusion / master
2024-03-04 09:21:47,743 | sd | DEBUG | installer | Submodule: wiki / master
2024-03-04 09:21:47,764 | sd | DEBUG | paths | Register paths
2024-03-04 09:21:47,825 | sd | DEBUG | installer | Installed packages: 252
2024-03-04 09:21:47,826 | sd | DEBUG | installer | Extensions all: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg']
2024-03-04 09:21:47,939 | sd | DEBUG | installer | Running extension installer: F:\automatic\extensions-builtin\sd-extension-system-info\install.py
2024-03-04 09:21:48,158 | sd | DEBUG | installer | Running extension installer: F:\automatic\extensions-builtin\sd-webui-agent-scheduler\install.py
2024-03-04 09:21:48,376 | sd | DEBUG | installer | Running extension installer: F:\automatic\extensions-builtin\sd-webui-controlnet\install.py
2024-03-04 09:21:48,600 | sd | DEBUG | installer | Running extension installer: F:\automatic\extensions-builtin\stable-diffusion-webui-images-browser\install.py
2024-03-04 09:21:48,824 | sd | DEBUG | installer | Running extension installer: F:\automatic\extensions-builtin\stable-diffusion-webui-rembg\install.py
2024-03-04 09:21:49,041 | sd | DEBUG | installer | Extensions all: ['adetailer']
2024-03-04 09:21:49,042 | sd | DEBUG | installer | Running extension installer: F:\automatic\extensions\adetailer\install.py
2024-03-04 09:21:49,293 | sd | INFO | installer | Extensions enabled: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg', 'adetailer']
2024-03-04 09:21:49,295 | sd | INFO | installer | Verifying requirements
2024-03-04 09:21:49,302 | sd | DEBUG | launch | Setup complete without errors: 1709540509
2024-03-04 09:21:49,306 | sd | DEBUG | installer | Extension preload: {'extensions-builtin': 0.0, 'extensions': 0.0}
2024-03-04 09:21:49,307 | sd | DEBUG | launch | Starting module: <module 'webui' from 'F:\\automatic\\webui.py'>
2024-03-04 09:21:49,308 | sd | INFO | launch | Command line args: ['--debug'] debug=True
2024-03-04 09:21:49,309 | sd | DEBUG | launch | Env flags: []
2024-03-04 09:21:53,982 | sd | DEBUG | installer | Package not found: olive-ai
2024-03-04 09:21:55,090 | sd | INFO | loader | Load packages: {'torch': '2.2.0+cu121', 'diffusers': '0.26.3', 'gradio': '3.43.2'}
2024-03-04 09:21:55,700 | sd | DEBUG | shared | Read: file="config.json" json=34 bytes=1484 time=0.001
2024-03-04 09:21:55,703 | sd | DEBUG | shared | Unknown settings: ['cross_attention_options', 'multiple_tqdm']
2024-03-04 09:21:55,704 | sd | INFO | shared | Engine: backend=Backend.DIFFUSERS compute=cuda device=cuda attention="Scaled-Dot-Product" mode=no_grad
2024-03-04 09:21:55,737 | sd | INFO | shared | Device: device=NVIDIA GeForce RTX 4070 Ti SUPER n=1 arch=sm_90 cap=(8, 9) cuda=12.1 cudnn=8801 driver=551.23
2024-03-04 09:21:57,463 | sd | DEBUG | __init__ | ONNX: version=1.17.0 provider=CUDAExecutionProvider, available=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
2024-03-04 09:21:57,516 | sd | DEBUG | sd_hijack | Importing LDM
2024-03-04 09:21:57,525 | sd | DEBUG | webui | Entering start sequence
2024-03-04 09:21:57,527 | sd | DEBUG | webui | Initializing
2024-03-04 09:21:57,537 | sd | INFO | sd_vae | Available VAEs: path="models\VAE" items=1
2024-03-04 09:21:57,539 | sd | INFO | extensions | Disabled extensions: ['sd-webui-controlnet']
2024-03-04 09:21:57,540 | sd | DEBUG | modelloader | Scanning diffusers cache: ['models\\Diffusers'] items=0 time=0.00
2024-03-04 09:21:57,544 | sd | DEBUG | shared | Read: file="cache.json" json=1 bytes=5112 time=0.003
2024-03-04 09:21:57,548 | sd | DEBUG | shared | Read: file="metadata.json" json=29 bytes=16730 time=0.003
2024-03-04 09:21:57,551 | sd | INFO | sd_models | Available models: path="models\Stable-diffusion" items=28 time=0.01
2024-03-04 09:21:57,582 | sd | DEBUG | webui | Load extensions
2024-03-04 09:21:57,600 | sd | INFO | networks | LoRA networks: available=0 folders=2
2024-03-04 09:21:57,602 | sd | INFO | script_loading | Extension: script='extensions-builtin\Lora\scripts\lora_script.py' 09:21:57-600834 INFO     LoRA networks: available=0 folders=2
2024-03-04 09:21:57,811 | sd | INFO | script_loading | Extension: script='extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py' Using sqlite file: extensions-builtin\sd-webui-agent-scheduler\task_scheduler.sqlite3
2024-03-04 09:21:59,139 | sd | INFO | script_loading | Extension: script='extensions\adetailer\scripts\!adetailer.py' [-] ADetailer initialized. version: 24.3.0, num models: 10
2024-03-04 09:21:59,148 | sd | DEBUG | webui | Extensions init time: 1.57 sd-webui-agent-scheduler=0.19 stable-diffusion-webui-images-browser=0.46 adetailer=0.87
2024-03-04 09:21:59,159 | sd | DEBUG | shared | Read: file="html/upscalers.json" json=4 bytes=2672 time=0.004
2024-03-04 09:21:59,163 | sd | DEBUG | shared | Read: file="extensions-builtin\sd-extension-chainner\models.json" json=24 bytes=2719 time=0.003
2024-03-04 09:21:59,165 | sd | DEBUG | chainner_model | chaiNNer models: path="models\chaiNNer" defined=24 discovered=0 downloaded=0
2024-03-04 09:21:59,167 | sd | DEBUG | modelloader | Load upscalers: total=52 downloaded=0 user=0 time=0.02 ['None', 'Lanczos', 'Nearest', 'ChaiNNer', 'ESRGAN', 'LDSR', 'RealESRGAN', 'SCUNet', 'SD', 'SwinIR']
2024-03-04 09:21:59,616 | sd | DEBUG | styles | Load styles: folder="models\styles" items=288 time=0.45
2024-03-04 09:21:59,618 | sd | DEBUG | webui | Creating UI
2024-03-04 09:21:59,619 | sd | INFO | theme | UI theme: name="black-orange" style=Auto base=sdnext.css
2024-03-04 09:21:59,623 | sd | DEBUG | ui_txt2img | UI initialize: txt2img
2024-03-04 09:21:59,640 | sd | DEBUG | shared | Read: file="html\reference.json" json=37 bytes=20003 time=0.003
2024-03-04 09:21:59,646 | sd | DEBUG | ui_extra_networks | Extra networks: page='model' items=65 subfolders=3 tab=txt2img folders=['models\\Stable-diffusion', 'models\\Diffusers', 'models\\Reference'] list=0.01 thumb=0.00 desc=0.00 info=0.00 workers=4
2024-03-04 09:21:59,657 | sd | DEBUG | ui_extra_networks | Extra networks: page='style' items=288 subfolders=1 tab=txt2img folders=['models\\styles', 'html'] list=0.01 thumb=0.00 desc=0.00 info=0.00 workers=4
2024-03-04 09:21:59,660 | sd | DEBUG | ui_extra_networks | Extra networks: page='embedding' items=0 subfolders=0 tab=txt2img folders=['models\\embeddings'] list=0.00 thumb=0.00 desc=0.00 info=0.00 workers=4
2024-03-04 09:21:59,663 | sd | DEBUG | ui_extra_networks | Extra networks: page='hypernetwork' items=0 subfolders=0 tab=txt2img folders=['models\\hypernetworks'] list=0.00 thumb=0.00 desc=0.00 info=0.00 workers=4
2024-03-04 09:21:59,665 | sd | DEBUG | ui_extra_networks | Extra networks: page='vae' items=1 subfolders=0 tab=txt2img folders=['models\\VAE'] list=0.00 thumb=0.00 desc=0.00 info=0.00 workers=4
2024-03-04 09:21:59,667 | sd | DEBUG | ui_extra_networks | Extra networks: page='lora' items=0 subfolders=0 tab=txt2img folders=['models\\Lora', 'models\\LyCORIS'] list=0.00 thumb=0.00 desc=0.00 info=0.00 workers=4
2024-03-04 09:21:59,727 | sd | DEBUG | ui_img2img | UI initialize: img2img
2024-03-04 09:21:59,850 | sd | DEBUG | ui_control_helpers | UI initialize: control models=models\control
2024-03-04 09:22:00,010 | sd | DEBUG | shared | Read: file="ui-config.json" json=15 bytes=526 time=0.004
2024-03-04 09:22:00,061 | sd | DEBUG | theme | Themes: builtin=12 gradio=5 huggingface=55
2024-03-04 09:22:00,850 | sd | DEBUG | ui_extensions | Extension list: processed=339 installed=8 enabled=7 disabled=1 visible=339 hidden=0
2024-03-04 09:22:00,946 | sd | DEBUG | webui | Root paths: ['F:\\automatic']
2024-03-04 09:22:01,007 | sd | INFO | webui | Local URL: http://127.0.0.1:7860/
2024-03-04 09:22:01,007 | sd | DEBUG | webui | Gradio functions: registered=2401
2024-03-04 09:22:01,009 | sd | DEBUG | middleware | FastAPI middleware: ['Middleware', 'Middleware']
2024-03-04 09:22:01,011 | sd | DEBUG | webui | Creating API
2024-03-04 09:22:01,239 | sd | INFO | task_runner | [AgentScheduler] Task queue is empty
2024-03-04 09:22:01,240 | sd | INFO | api | [AgentScheduler] Registering APIs
2024-03-04 09:22:01,323 | sd | DEBUG | webui | Scripts setup: ['IP Adapters:0.01', 'AnimateDiff:0.006', 'ADetailer:0.043', 'X/Y/Z Grid:0.006', 'Face:0.008']
2024-03-04 09:22:01,325 | sd | DEBUG | sd_models | Model metadata: file="metadata.json" no changes
2024-03-04 09:22:01,326 | sd | DEBUG | webui | Model auto load disabled
2024-03-04 09:22:01,327 | sd | DEBUG | script_callbacks | Script callback init time: image_browser.py:ui_tabs=0.31 system-info.py:app_started=0.17 task_scheduler.py:app_started=0.10
2024-03-04 09:22:01,329 | sd | DEBUG | shared | Save: file="config.json" json=34 bytes=1437 time=0.001
2024-03-04 09:22:01,330 | sd | INFO | webui | Startup time: 12.02 torch=4.62 olive=0.06 gradio=1.11 libraries=2.43 extensions=1.57 networks=0.45 ui-en=0.15 ui-img2img=0.05 ui-control=0.07 ui-settings=0.13 ui-extensions=0.73 ui-defaults=0.05 launch=0.10 app-started=0.27
2024-03-04 09:22:01,331 | sd | DEBUG | shared | Unused settings: ['cross_attention_options', 'multiple_tqdm']
2024-03-04 09:22:30,400 | sd | DEBUG | modeldata | Model requested: fn=txt2img
2024-03-04 09:22:30,401 | sd | INFO | sd_models | Select: model="copaxTimelessxlSDXL1_v9 [c967070428]"
2024-03-04 09:22:30,402 | sd | DEBUG | sd_models | Load model: existing=False target=F:\automatic\models\Stable-diffusion\copaxTimelessxlSDXL1_v9.safetensors info=None
2024-03-04 09:22:30,434 | sd | DEBUG | devices | Desired Torch parameters: dtype=FP16 no-half=False no-half-vae=False upscast=False
2024-03-04 09:22:30,436 | sd | INFO | devices | Setting Torch parameters: device=cuda dtype=torch.float16 vae=torch.float16 unet=torch.float16 context=no_grad fp16=True bf16=None optimization=Scaled-Dot-Product
2024-03-04 09:22:30,438 | sd | DEBUG | sd_models | Diffusers loading: path="F:\automatic\models\Stable-diffusion\copaxTimelessxlSDXL1_v9.safetensors"
2024-03-04 09:22:30,439 | sd | INFO | sd_models | Autodetect: model="Stable Diffusion XL" class=StableDiffusionXLPipeline file="F:\automatic\models\Stable-diffusion\copaxTimelessxlSDXL1_v9.safetensors" size=6617MB
2024-03-04 09:22:34,337 | sd | DEBUG | sd_models | Setting model: pipeline=StableDiffusionXLPipeline config={'low_cpu_mem_usage': True, 'torch_dtype': torch.float16, 'load_connected_pipeline': True, 'extract_ema': True, 'use_safetensors': True}
2024-03-04 09:22:34,340 | sd | DEBUG | sd_models | Setting model: enable VAE slicing
2024-03-04 09:22:35,959 | sd | INFO | textual_inversion | Load embeddings: loaded=0 skipped=0 time=0.00
2024-03-04 09:22:36,115 | sd | DEBUG | devices | GC: collected=7718 device=cuda {'ram': {'used': 8.7, 'total': 63.78}, 'gpu': {'used': 8.07, 'total': 15.99}, 'retries': 0, 'oom': 0} time=0.15
2024-03-04 09:22:36,123 | sd | INFO | sd_models | Load model: time=5.56 load=5.56 native=1024 {'ram': {'used': 8.7, 'total': 63.78}, 'gpu': {'used': 8.07, 'total': 15.99}, 'retries': 0, 'oom': 0}
2024-03-04 09:22:36,306 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionXLPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 2048]), 'pooled_prompt_embeds': torch.Size([1, 1280]), 'negative_prompt_embeds': torch.Size([1, 77, 2048]), 'negative_pooled_prompt_embeds': torch.Size([1, 1280]), 'guidance_scale': 5, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'denoising_end': None, 'width': 1024, 'height': 1280, 'parser': 'Full parser'}
2024-03-04 09:22:36,338 | sd | DEBUG | sd_samplers | Sampler: sampler="Euler a" config={'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'prediction_type': 'epsilon', 'rescale_betas_zero_snr': False}
2024-03-04 09:22:51,645 | sd | DEBUG | sd_vae_taesd | VAE load: type=taesd model=models\TAESD\taesdxl_decoder.pth
2024-03-04 09:22:52,463 | sd | DEBUG | sd_models | Pipeline class change: original=StableDiffusionXLPipeline target=StableDiffusionXLInpaintPipeline
2024-03-04 09:22:52,473 | sd | DEBUG | masking | Mask: size=1024x1280 masked=11634px area=0.01 auto=None blur=0.016 erode=0.125 dilate=0.01 type=Grayscale time=0.01
2024-03-04 09:22:52,480 | sd | DEBUG | images | Image resize: input=<PIL.Image.Image image mode=L size=159x198 at 0x1581C781030> mode=2 target=1024x1280 upscaler=None function=init
2024-03-04 09:22:52,500 | sd | DEBUG | images | Image resize: input=<PIL.Image.Image image mode=RGB size=159x198 at 0x1582291F3A0> mode=3 target=1024x1280 upscaler=None function=init
2024-03-04 09:22:52,594 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionXLInpaintPipeline task=DiffusersTaskType.INPAINTING set={'prompt_embeds': torch.Size([1, 77, 2048]), 'pooled_prompt_embeds': torch.Size([1, 1280]), 'negative_prompt_embeds': torch.Size([1, 77, 2048]), 'negative_pooled_prompt_embeds': torch.Size([1, 1280]), 'guidance_scale': 5, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 125, 'eta': 1.0, 'guidance_rescale': 0.7, 'denoising_start': None, 'denoising_end': None, 'image': [<PIL.Image.Image image mode=RGB size=1024x1280 at 0x15826344AF0>], 'mask_image': <PIL.Image.Image image mode=L size=1024x1280 at 0x1582637F040>, 'strength': 0.4, 'height': 1280, 'width': 1024, 'parser': 'Full parser'}
2024-03-04 09:22:52,605 | sd | DEBUG | sd_samplers | Sampler: sampler="Euler a" config={'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'prediction_type': 'epsilon', 'rescale_betas_zero_snr': False}
2024-03-04 09:23:09,011 | sd | INFO | devices | High memory utilization: GPU=92% RAM=3% {'ram': {'used': 2.09, 'total': 63.78}, 'gpu': {'used': 14.72, 'total': 15.99}, 'retries': 0, 'oom': 0}
2024-03-04 09:23:09,178 | sd | DEBUG | devices | GC: collected=2173 device=cuda {'ram': {'used': 2.09, 'total': 63.78}, 'gpu': {'used': 8.28, 'total': 15.99}, 'retries': 0, 'oom': 0} time=0.17
2024-03-04 09:23:12,922 | sd | DEBUG | images | Image resize: input=<PIL.Image.Image image mode=RGB size=1024x1280 at 0x1581C389150> mode=2 target=159x198 upscaler=None function=apply_overlay
2024-03-04 09:23:12,939 | sd | INFO | processing | Processed: images=1 time=20.48 its=2.44 memory={'ram': {'used': 2.12, 'total': 63.78}, 'gpu': {'used': 9.01, 'total': 15.99}, 'retries': 0, 'oom': 0}
2024-03-04 09:23:12,962 | sd | INFO | images | Saving: image="outputs\text\2024-03-04\00018.png" type=PNG resolution=1024x1280 size=0
2024-03-04 09:23:13,298 | sd | DEBUG | sd_models | Pipeline class change: original=StableDiffusionXLInpaintPipeline target=StableDiffusionXLPipeline
2024-03-04 09:23:13,380 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionXLPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 2048]), 'pooled_prompt_embeds': torch.Size([1, 1280]), 'negative_prompt_embeds': torch.Size([1, 77, 2048]), 'negative_pooled_prompt_embeds': torch.Size([1, 1280]), 'guidance_scale': 5, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'denoising_end': None, 'width': 1024, 'height': 1280, 'parser': 'Full parser'}
2024-03-04 09:23:13,390 | sd | DEBUG | sd_samplers | Sampler: sampler="Euler a" config={'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'prediction_type': 'epsilon', 'rescale_betas_zero_snr': False}
2024-03-04 09:23:27,696 | sd | DEBUG | masking | Mask: size=1024x1280 masked=12698px area=0.01 auto=None blur=0.016 erode=0.125 dilate=0.01 type=Grayscale time=0.01
2024-03-04 09:23:27,703 | sd | DEBUG | images | Image resize: input=<PIL.Image.Image image mode=L size=162x202 at 0x158263BE830> mode=2 target=1024x1280 upscaler=None function=init
2024-03-04 09:23:27,722 | sd | DEBUG | images | Image resize: input=<PIL.Image.Image image mode=RGB size=162x202 at 0x158263BE740> mode=3 target=1024x1280 upscaler=None function=init
2024-03-04 09:23:27,823 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionXLInpaintPipeline task=DiffusersTaskType.INPAINTING set={'prompt_embeds': torch.Size([1, 77, 2048]), 'pooled_prompt_embeds': torch.Size([1, 1280]), 'negative_prompt_embeds': torch.Size([1, 77, 2048]), 'negative_pooled_prompt_embeds': torch.Size([1, 1280]), 'guidance_scale': 5, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 125, 'eta': 1.0, 'guidance_rescale': 0.7, 'denoising_start': None, 'denoising_end': None, 'image': [<PIL.Image.Image image mode=RGB size=1024x1280 at 0x1581C3D2290>], 'mask_image': <PIL.Image.Image image mode=L size=1024x1280 at 0x158263BE530>, 'strength': 0.4, 'height': 1280, 'width': 1024, 'parser': 'Full parser'}
2024-03-04 09:23:27,835 | sd | DEBUG | sd_samplers | Sampler: sampler="Euler a" config={'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'prediction_type': 'epsilon', 'rescale_betas_zero_snr': False}
2024-03-04 09:23:42,871 | sd | INFO | devices | High memory utilization: GPU=92% RAM=3% {'ram': {'used': 2.1, 'total': 63.78}, 'gpu': {'used': 14.74, 'total': 15.99}, 'retries': 0, 'oom': 0}
2024-03-04 09:23:43,037 | sd | DEBUG | devices | GC: collected=2399 device=cuda {'ram': {'used': 2.1, 'total': 63.78}, 'gpu': {'used': 8.24, 'total': 15.99}, 'retries': 0, 'oom': 0} time=0.17
2024-03-04 09:23:45,171 | sd | DEBUG | images | Image resize: input=<PIL.Image.Image image mode=RGB size=1024x1280 at 0x1581C388280> mode=2 target=162x202 upscaler=None function=apply_overlay
2024-03-04 09:23:45,187 | sd | INFO | devices | High memory utilization: GPU=100% RAM=7% {'ram': {'used': 4.55, 'total': 63.78}, 'gpu': {'used': 15.99, 'total': 15.99}, 'retries': 0, 'oom': 0}
2024-03-04 09:23:45,451 | sd | DEBUG | devices | GC: collected=124 device=cuda {'ram': {'used': 2.49, 'total': 63.78}, 'gpu': {'used': 8.31, 'total': 15.99}, 'retries': 0, 'oom': 0} time=0.26
2024-03-04 09:23:45,454 | sd | INFO | processing | Processed: images=1 time=17.77 its=2.81 memory={'ram': {'used': 2.49, 'total': 63.78}, 'gpu': {'used': 8.31, 'total': 15.99}, 'retries': 0, 'oom': 0}
2024-03-04 09:23:45,476 | sd | INFO | images | Saving: image="outputs\text\2024-03-04\00019.png" type=PNG resolution=1024x1280 size=0
2024-03-04 09:23:45,778 | sd | INFO | processing | Processed: images=2 time=69.65 its=1.44 memory={'ram': {'used': 2.44, 'total': 63.78}, 'gpu': {'used': 8.31, 'total': 15.99}, 'retries': 0, 'oom': 0}
2024-03-04 09:23:46,438 | sd | DEBUG | gr_tempdir | Saving temp: image="C:\Users\home\AppData\Local\Temp\gradio\tmpgc724d0h.png" resolution=2048x1280 size=4395713
2024-03-04 09:23:59,569 | sd | DEBUG | launch | Server: alive=True jobs=1 requests=144 uptime=124 memory=2.17/63.78 backend=Backend.DIFFUSERS state=job="txt2img" 0/1
2024-03-04 09:25:31,233 | sd | DEBUG | sd_models | Pipeline class change: original=StableDiffusionXLInpaintPipeline target=StableDiffusionXLImg2ImgPipeline
2024-03-04 09:25:31,485 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionXLImg2ImgPipeline task=DiffusersTaskType.IMAGE_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 2048]), 'pooled_prompt_embeds': torch.Size([1, 1280]), 'negative_prompt_embeds': torch.Size([1, 77, 2048]), 'negative_pooled_prompt_embeds': torch.Size([1, 1280]), 'guidance_scale': 5, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 167, 'eta': 1.0, 'guidance_rescale': 0.7, 'denoising_start': None, 'denoising_end': None, 'image': [<PIL.Image.Image image mode=RGB size=1024x1280 at 0x15826346560>], 'strength': 0.3, 'parser': 'Full parser'}
2024-03-04 09:25:31,495 | sd | DEBUG | sd_samplers | Sampler: sampler="Euler a" config={'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'prediction_type': 'epsilon', 'rescale_betas_zero_snr': False}
2024-03-04 09:25:52,609 | sd | DEBUG | sd_models | Pipeline class change: original=StableDiffusionXLImg2ImgPipeline target=StableDiffusionXLInpaintPipeline
2024-03-04 09:25:52,619 | sd | DEBUG | masking | Mask: size=1024x1280 masked=11289px area=0.01 auto=None blur=0.016 erode=0.125 dilate=0.01 type=Grayscale time=0.01
2024-03-04 09:25:52,626 | sd | DEBUG | images | Image resize: input=<PIL.Image.Image image mode=L size=157x196 at 0x158263BDF60> mode=2 target=1024x1280 upscaler=None function=init
2024-03-04 09:25:52,645 | sd | DEBUG | images | Image resize: input=<PIL.Image.Image image mode=RGB size=157x196 at 0x158263BEE60> mode=3 target=1024x1280 upscaler=None function=init
2024-03-04 09:25:52,738 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionXLInpaintPipeline task=DiffusersTaskType.INPAINTING set={'prompt_embeds': torch.Size([1, 77, 2048]), 'pooled_prompt_embeds': torch.Size([1, 1280]), 'negative_prompt_embeds': torch.Size([1, 77, 2048]), 'negative_pooled_prompt_embeds': torch.Size([1, 1280]), 'guidance_scale': 5, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 101, 'eta': 1.0, 'guidance_rescale': 0.7, 'denoising_start': None, 'denoising_end': None, 'image': [<PIL.Image.Image image mode=RGB size=1024x1280 at 0x1582637C6D0>], 'mask_image': <PIL.Image.Image image mode=L size=1024x1280 at 0x158263BE050>, 'strength': 0.5, 'height': 1280, 'width': 1024, 'parser': 'Full parser'}
2024-03-04 09:25:52,749 | sd | DEBUG | sd_samplers | Sampler: sampler="Euler a" config={'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'prediction_type': 'epsilon', 'rescale_betas_zero_snr': False}
2024-03-04 09:25:59,568 | sd | DEBUG | launch | Server: alive=True jobs=1 requests=214 uptime=244 memory=2.29/63.78 backend=Backend.DIFFUSERS state=job="txt2img" 0/2
2024-03-04 09:26:07,755 | sd | INFO | devices | High memory utilization: GPU=92% RAM=4% {'ram': {'used': 2.26, 'total': 63.78}, 'gpu': {'used': 14.74, 'total': 15.99}, 'retries': 0, 'oom': 0}
2024-03-04 09:26:07,937 | sd | DEBUG | devices | GC: collected=2491 device=cuda {'ram': {'used': 2.26, 'total': 63.78}, 'gpu': {'used': 8.23, 'total': 15.99}, 'retries': 0, 'oom': 0} time=0.18
2024-03-04 09:26:10,190 | sd | DEBUG | images | Image resize: input=<PIL.Image.Image image mode=RGB size=1024x1280 at 0x157FAC384C0> mode=2 target=157x196 upscaler=None function=apply_overlay
2024-03-04 09:26:10,206 | sd | INFO | devices | High memory utilization: GPU=100% RAM=7% {'ram': {'used': 4.72, 'total': 63.78}, 'gpu': {'used': 15.99, 'total': 15.99}, 'retries': 0, 'oom': 0}
2024-03-04 09:26:10,476 | sd | DEBUG | devices | GC: collected=124 device=cuda {'ram': {'used': 2.66, 'total': 63.78}, 'gpu': {'used': 8.34, 'total': 15.99}, 'retries': 0, 'oom': 0} time=0.27
2024-03-04 09:26:10,480 | sd | INFO | processing | Processed: images=1 time=17.87 its=2.80 memory={'ram': {'used': 2.66, 'total': 63.78}, 'gpu': {'used': 8.34, 'total': 15.99}, 'retries': 0, 'oom': 0}
2024-03-04 09:26:10,503 | sd | INFO | images | Saving: image="outputs\image\2024-03-04\00014.png" type=PNG resolution=1024x1280 size=0
2024-03-04 09:26:10,991 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionXLInpaintPipeline task=DiffusersTaskType.INPAINTING set={'prompt_embeds': torch.Size([1, 77, 2048]), 'pooled_prompt_embeds': torch.Size([1, 1280]), 'negative_prompt_embeds': torch.Size([1, 77, 2048]), 'negative_pooled_prompt_embeds': torch.Size([1, 1280]), 'guidance_scale': 5, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 167, 'eta': 1.0, 'guidance_rescale': 0.7, 'denoising_start': None, 'denoising_end': None, 'image': [<PIL.Image.Image image mode=RGB size=1024x1280 at 0x157F7AC7AF0>], 'mask_image': None, 'strength': 0.3, 'height': 1280, 'width': 1024, 'parser': 'Full parser'}
2024-03-04 09:26:11,004 | sd | DEBUG | sd_samplers | Sampler: sampler="Euler a" config={'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'prediction_type': 'epsilon', 'rescale_betas_zero_snr': False}
2024-03-04 09:26:11,022 | sd | ERROR | call_queue | Exception: 'NoneType' object is not iterable
2024-03-04 09:26:11,023 | sd | ERROR | call_queue | Arguments: args=('task(fiogiqe17pvfrat)', 0.0, 'woman, sitting on chair, opulent, velvet, burgundy, 19th century', 'porn, sex, nude, naked', [], <PIL.Image.Image image mode=RGBA size=1024x1280 at 0x15826345DE0>, None, None, None, None, None, None, 50, 13, 4, 1, 1, True, False, False, 2, 1, 5, 6, 0.7, 0, 1, 0, 1, 0.3, -1.0, -1.0, 0, 0, 0, 0, 1280, 1024, 1, 0, 'None', 0, 32, 0, None, '', '', '', 0, 0, 0, 0, False, 4, 0.95, False, 0.6, 1, '#000000', 0, [], 0, 1, 'None', 'None', 'None', 'None', 0.5, 0.5, 0.5, 0.5, None, None, None, None, 0, 0, 0, 0, 1, 1, 1, 1, 'None', 16, 'None', 1, True, 'None', 2, True, 1, 0, True, 'none', 3, 4, 0.25, 0.25, True, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.7, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.5, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 5.5, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'Default', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'Default', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, '', '', 0.5, True, 1, False, 'None', None, 4, 0.5, 'Linear', 'None', '<span>&nbsp Outpainting</span><br>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<span>&nbsp SD Upscale</span><br>', 64, 0, 2, 14, True, 1, 3, 6, 0.5, 0.1, 'None', 2, True, 1, 0, 0, '', [], 0, '', [], 0, '', [], False, True, False, False, False, False, 0, 'None', [], 'FaceID Base', True, True, 1, 1, 1, 0.5, False, 'person', 1, 0.5, True) kwargs={}
2024-03-04 09:26:11,065 | sd | ERROR | errors | gradio call: TypeError
```

Version Platform Description

app: SD.next updated: 2024-02-24 hash: c1dfb1b2 url: https://github.com/vladmandic/automatic/tree/master

arch: AMD64 cpu: Intel64 Family 6 Model 183 Stepping 1, GenuineIntel system: Windows release: Windows-10-10.0.22631-SP0 python: 3.10.11

Torch: 2.2.0+cu121 Autocast half

GPU: device: NVIDIA GeForce RTX 4070 Ti SUPER (1) (sm_90) (8, 9) cuda: 12.1 cudnn: 8801 driver: 551.23

xformers: diffusers: 0.26.3 transformers: 4.37.2

device: active: cuda dtype: torch.float16 vae: torch.float16 unet: torch.float16

ADetailer: Created Wed Apr26 2023 07:54 | Added Fri May12 2023 00:00 | Pushed Tue Feb27 2024 12:58 | Updated Tue Jan23 2024 13:20

Browser: Firefox 123

URL link of the extension

https://github.com/Bing-su/adetailer

URL link of the issue reported in the extension repository

No response


vladmandic commented 6 months ago

with adetailer, it actually sets masks and triggers processing in the main app with newly set params. but for batch-count>1, it looks like it doesn't set the mask since it thinks processing is over, so the run on the second batch happens without a mask being set, which causes the error.

i have to admit, i'm getting a bit lost in adetailer's internal logic, but the problem seems to be around this function: https://github.com/Bing-su/adetailer/blob/3f1d1b9772aae767ff63be15be425348b0083324/scripts/!adetailer.py#L758

this should be reported upstream to adetailer. but given the past comments, not sure if batching is considered supported at all by adetailer - see https://github.com/Bing-su/adetailer/issues/434#issuecomment-1845232897
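to make the failure shape concrete, here is a purely illustrative snippet (not sdnext or adetailer code, names are hypothetical): on the second batch item no mask is built, so downstream code that expects an iterable of masks receives None, which is exactly the TypeError in the log.

```python
# purely illustrative, not sdnext or adetailer code: the shape of the failure
# in the log above. on the second batch item adetailer thinks processing is
# already over, so no mask is built and downstream code receives None.
def apply_masks(masks):
    # hypothetical stand-in for the downstream step that consumes the masks
    return [m for m in masks]

apply_masks([])     # first batch item: masks were built, this succeeds
apply_masks(None)   # second batch item: TypeError: 'NoneType' object is not iterable
```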

cgidesign-de commented 6 months ago

Thanks, I posted in there as well.

cgidesign-de commented 6 months ago

Bing-su answered in the issue thread. He wrote that Batch mode (multiple images in one run) is not supported, but Batch count is fine. According to his comment, it works with image2image in webUI 1.8.0.

vladmandic commented 6 months ago

that may be so, but i cannot trace it down within the adetailer code to see if a fix is needed on the sdnext side; this is extremely convoluted on the adetailer side.

cgidesign-de commented 6 months ago

Ok, pity, but that's life.

vladmandic commented 5 months ago

i've added a workaround for this issue. it's not ideal, but it's something since the adetailer author is not going to look at it.
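for illustration only, one possible shape of such a guard (hypothetical names, not the actual sdnext patch):

```python
# hypothetical sketch, not the actual sdnext patch: if a batch item arrives
# without a mask (as on the second item in the log), fall back to a
# non-masked img2img path instead of calling inpainting with mask_image=None.
def run_item(pipe_inpaint, pipe_img2img, image, mask_image=None, **kwargs):
    if mask_image is None:
        return pipe_img2img(image=image, **kwargs)
    return pipe_inpaint(image=image, mask_image=mask_image, **kwargs)
```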

cgidesign-de commented 5 months ago

I tried with the --upgrade flag to get the latest sd.next - the issue is still there.

We have to wait for a new release, right?

vladmandic commented 5 months ago

right. or use dev branch.

cgidesign-de commented 5 months ago

I have upgraded today - now it is working. Thanks for solving this :-)