It does not support batch mode.
OK, thank you for your answer. I hope batch mode can be supported in the future.
Dear Bing-su, for me batch processing works in Text2Image of vladmandic's automatic, but it fails within Image2Image. You can see the details in the post vladmandic linked above. Maybe batching was implemented for Text2Image but not for Image2Image? And thanks for ADetailer :-)
@cgidesign-de In this post, "batch mode" means processing the generated images simultaneously, which is different from batch count and batch size, even though they use the same word.
Besides, I have tested img2img with batch count > 2 in webui 1.8.0 and it works fine; the log from that test is below.
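For anyone who wants to reproduce that test programmatically: the webui in the log below was launched with `--api`, so something like the following should work. This is a hedged sketch — the endpoint and the `n_iter`/`batch_size` fields are the standard webui API, but the exact ADetailer `args` payload format depends on the extension version, so verify it against the ADetailer documentation for your install:

```python
import base64
import requests

# Any local image works as the img2img input for this test.
with open("input.png", "rb") as f:
    init_image = base64.b64encode(f.read()).decode()

payload = {
    "init_images": [init_image],
    "prompt": "1girl",
    "steps": 16,
    "n_iter": 2,      # batch count: number of sequential generations
    "batch_size": 1,  # images per generation
    "alwayson_scripts": {
        # Assumed ADetailer payload format; check the extension's
        # API documentation for the version you are running.
        "ADetailer": {"args": [{"ad_model": "face_yolov8n.pt"}]}
    },
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
r.raise_for_status()
print(len(r.json()["images"]), "images returned")
```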
Python 3.10.13 | packaged by conda-forge | (main, Dec 23 2023, 15:27:34) [MSC v.1937 64 bit (AMD64)]
Version: v1.8.0-111-g8e82294f
Commit hash: 8e82294fda7db0bdcafeaa20573026a2fe47830a
Launching Web UI with arguments: --xformers --api
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 24.3.0, num models: 11
ControlNet preprocessor location: D:\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2024-03-05 19:56:07,826 - ControlNet - INFO - ControlNet v1.1.440
2024-03-05 19:56:07,982 - ControlNet - INFO - ControlNet v1.1.440
Loading weights [5998292c04] from D:\stable-diffusion-webui\models\Stable-diffusion\Counterfeit-V3.0_fp16-no-ema.safetensors
2024-03-05 19:56:08,389 - ControlNet - INFO - ControlNet UI callback registered.
Creating model from config: D:\stable-diffusion-webui\configs\v1-inference.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 15.5s (prepare environment: 2.6s, import torch: 4.8s, import gradio: 1.6s, setup paths: 0.8s, initialize shared: 0.3s, other imports: 0.7s, load scripts: 3.0s, create ui: 0.7s, gradio launch: 0.3s, add APIs: 0.6s).
Loading VAE weights specified in settings: D:\stable-diffusion-webui\models\VAE\kl-f8-anime2.safetensors
Applying attention optimization: xformers... done.
Model loaded in 4.2s (load weights from disk: 0.6s, create model: 1.1s, apply weights to model: 2.1s, load VAE: 0.1s, calculate empty prompt: 0.1s).
100%|██████████████████████████████████████████████████████████████████████████████████| 16/16 [00:04<00:00, 3.23it/s]
Total progress: 50%|█████████████████████████████████ | 16/32 [00:03<00:03, 4.07it/s]
0: 640x448 2 faces, 80.0ms
Speed: 2.0ms preprocess, 80.0ms inference, 36.0ms postprocess per image at shape (1, 3, 640, 448)
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:01<00:00, 6.74it/s]
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:01<00:00, 7.12it/s]
0: 640x448 1 face, 10.5ms
Speed: 1.0ms preprocess, 10.5ms inference, 2.0ms postprocess per image at shape (1, 3, 640, 448)
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:01<00:00, 7.17it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 16/16 [00:03<00:00, 4.06it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 32/32 [00:14<00:00, 3.92it/s]
0: 640x448 2 faces, 8.5ms
Speed: 1.0ms preprocess, 8.5ms inference, 1.0ms postprocess per image at shape (1, 3, 640, 448)
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:01<00:00, 7.20it/s]
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:01<00:00, 7.19it/s]
0: 640x448 2 faces, 7.0ms
Speed: 2.0ms preprocess, 7.0ms inference, 1.0ms postprocess per image at shape (1, 3, 640, 448)
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:01<00:00, 7.17it/s]
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:01<00:00, 7.15it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 32/32 [00:22<00:00, 1.42it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 16/16 [00:02<00:00, 6.95it/s]
Total progress: 33%|██████████████████████ | 16/48 [00:02<00:04, 7.22it/s]
0: 640x448 2 faces, 82.3ms
Speed: 2.5ms preprocess, 82.3ms inference, 2.0ms postprocess per image at shape (1, 3, 640, 448)
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:01<00:00, 7.18it/s]
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:01<00:00, 7.22it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 16/16 [00:02<00:00, 7.17it/s]
Total progress: 67%|████████████████████████████████████████████ | 32/48 [00:08<00:02, 6.85it/s]
0: 640x448 1 face, 5.5ms
Speed: 2.0ms preprocess, 5.5ms inference, 1.0ms postprocess per image at shape (1, 3, 640, 448)
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:01<00:00, 7.22it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 16/16 [00:02<00:00, 7.19it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 48/48 [00:13<00:00, 7.05it/s]
0: 640x448 1 face, 6.0ms
Speed: 0.5ms preprocess, 6.0ms inference, 1.0ms postprocess per image at shape (1, 3, 640, 448)
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:01<00:00, 7.24it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 48/48 [00:15<00:00, 3.03it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 48/48 [00:15<00:00, 7.05it/s]
Thank you @Bing-su - I'll post about it in the vladmandic issue thread.
Question
Is this process handling one image at a time? Does ADetailer support running in batch mode? Batch mode would definitely improve the running speed, but the code seems to process only one image at a time. Is that because the approach itself cannot support it?
My understanding of the process:

1. Create an image.
2. Detect objects with a detection model and create a mask image.
3. Inpaint using the image from step 1 and the mask from step 2.
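To make that per-image loop concrete, here is a minimal sketch in Python. All function names are hypothetical stand-ins for illustration, not ADetailer's actual internals:

```python
from PIL import Image

def generate(prompt: str) -> Image.Image:
    """Step 1 stand-in: the Stable Diffusion generation pass."""
    return Image.new("RGB", (512, 512))

def detect(image: Image.Image) -> Image.Image:
    """Step 2 stand-in: a detection model (e.g. a YOLO face detector)
    would return a mask of the detected regions; here, an empty mask."""
    return Image.new("L", image.size, 0)

def inpaint(image: Image.Image, mask: Image.Image) -> Image.Image:
    """Step 3 stand-in: SD inpainting restricted to the masked region."""
    return image

# Each generated image runs through detect -> inpaint on its own.
# "Batch mode" would instead stack all the images into a single model
# call, which is the part that is not supported.
results = []
for _ in range(2):  # e.g. batch count = 2
    img = generate("1girl")
    mask = detect(img)
    results.append(inpaint(img, mask))
```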