vladmandic / automatic

SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
https://github.com/vladmandic/automatic
GNU Affero General Public License v3.0

[Feature]: Improve Prompt from file script to set correct batch info #2385

Open djasil opened 11 months ago

djasil commented 11 months ago

Issue Description

Select the "Prompt from File" script and insert a simple prompt in the input field below.

The issue is easier to reproduce using the Dynamic Prompts extension, for example: {red|green|blue|yellow|pink} {bicycle|car|motorcycle|airplane|helicopter}

Generate 2-3 images (with multiple lines, batch count, or both). The correct info is displayed in the preview window for every image, but the metadata gets mixed up when you save them.

If the prompt is static, only the seed is affected; with dynamic prompts things get pretty messy, and filenames in batches are affected as well.

This only happens when using the "Prompt from File" script. Images generated from the default prompt field, or saved automatically, are fine.

Enabling either of the two script options didn't fix it.

The log shows no errors.

Version Platform Description

Python 3.10.0 on Windows
Version: app=sd.next updated=2023-10-22 hash=be75ed7e
Platform: arch=AMD64 cpu=AMD64 Family 25 Model 33 Stepping 2, AuthenticAMD system=Windows release=Windows-10-10.0.19044-SP0 python=3.10.0
nVidia CUDA toolkit detected: nvidia-smi present
Extensions: disabled=[]
Extensions: enabled=['clip-interrogator-ext', 'Lora', 'sd-extension-chainner', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg'] extensions-builtin
Extensions: enabled=['adetailer', 'sd-dynamic-prompts', 'ultimate-upscale-for-automatic1111'] extensions
Device: device=NVIDIA GeForce RTX 4080 n=1 arch=sm_90 cap=(8, 9) cuda=12.1 cudnn=8801 driver=545.84
Browser session: client=127.0.0.1 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36

Relevant log output

No response

Backend

Original

Model

SD 1.5

Acknowledgements

vladmandic commented 11 months ago

does this happen with always save generated images or only when using save button?

djasil commented 11 months ago

Only when using the save button. Images saved automatically are fine.

vladmandic commented 11 months ago

that script has not been updated in ages. took a look and it was not registering the seed it actually used, so metadata got messed up on save. fixed.
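The failure mode described above can be illustrated with a minimal sketch (hypothetical simplified model, not the actual SD.Next code): if the script reseeds for every prompt line but only the first seed makes it into the recorded infotext, every manually saved image carries the first run's metadata.

```python
import random


def generate_batch_buggy(prompt_lines):
    """Buggy sketch: the infotext seed is taken from the first line's
    run only, so every image shares it regardless of its actual seed."""
    images = []
    first_seed = None
    for line in prompt_lines:
        seed = random.randrange(2**32)
        if first_seed is None:
            first_seed = seed
        # bug: records first_seed instead of this image's own seed
        images.append({"prompt": line, "infotext": f"{line}, Seed: {first_seed}"})
    return images


def generate_batch_fixed(prompt_lines):
    """Fixed sketch: each image records the seed actually used for it,
    so a manual save reads back the correct per-image metadata."""
    images = []
    for line in prompt_lines:
        seed = random.randrange(2**32)
        images.append({"prompt": line, "infotext": f"{line}, Seed: {seed}"})
    return images
```

The key point is that infotext must be built per image at generation time, not once per job.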

djasil commented 11 months ago

Updated with the script from c0ef02a, but the problem persists.

Am I missing something?

vladmandic commented 11 months ago

reproduce without dynamic prompts and upload more verbose log webui --debug, i'll reopen as needed.

djasil commented 11 months ago

Reproduced using --safe --debug.

Prompts:
red car
green bicycle
blue airplane

The metadata preview for the third prompt "blue airplane" is displayed correctly... (screenshot: img1_preview)

...but the image saved manually used the metadata of the first prompt "red car". (screenshot: img2_metadata)

Running in batch mixes up the filenames as well.

LOG:

F:\Stable Diffusion\SD.Next>webui.bat --autolaunch --safe --debug
Using VENV: F:\Stable Diffusion\SD.Next\venv
18:59:30-993606 DEBUG    Logger: file=F:\Stable Diffusion\SD.Next\sdnext.log level=10 size=0 mode=create
18:59:30-998200 INFO     Starting SD.Next
18:59:30-998200 INFO     Python 3.10.0 on Windows
18:59:31-118744 INFO     Version: app=sd.next updated=2023-10-22 hash=4ecdf97e url=https://github.com/vladmandic/automatic.git/tree/master
18:59:31-587599 INFO     Latest published version: 3fec4d493d4e14426b47bae9e9ddd810acec0023 2023-10-22T18:55:59Z
18:59:31-603225 INFO     Platform: arch=AMD64 cpu=AMD64 Family 25 Model 33 Stepping 2, AuthenticAMD system=Windows release=Windows-10-10.0.19044-SP0 python=3.10.0
18:59:31-603225 DEBUG    Setting environment tuning
18:59:31-603225 DEBUG    Torch overrides: cuda=False rocm=False ipex=False diml=False openvino=False
18:59:31-616343 DEBUG    Torch allowed: cuda=True rocm=True ipex=True diml=True openvino=True
18:59:31-616343 INFO     nVidia CUDA toolkit detected: nvidia-smi present
18:59:31-710103 DEBUG    Repository update time: Sun Oct 22 13:38:54 2023
18:59:31-710103 INFO     Verifying requirements
18:59:31-725727 INFO     Verifying packages
18:59:31-725727 INFO     Verifying repositories
18:59:31-788228 DEBUG    Submodule: F:\Stable Diffusion\SD.Next\repositories\stable-diffusion-stability-ai / main
18:59:32-556708 DEBUG    Submodule: F:\Stable Diffusion\SD.Next\repositories\taming-transformers / master
18:59:34-558055 DEBUG    Submodule: F:\Stable Diffusion\SD.Next\repositories\BLIP / main
18:59:35-227204 INFO     Verifying submodules
18:59:37-106216 DEBUG    Submodule: extensions-builtin/clip-interrogator-ext / main
18:59:37-168716 DEBUG    Submodule: extensions-builtin/sd-extension-chainner / main
18:59:37-231216 DEBUG    Submodule: extensions-builtin/sd-extension-system-info / main
18:59:37-295381 DEBUG    Submodule: extensions-builtin/sd-webui-agent-scheduler / main
18:59:37-357882 DEBUG    Submodule: extensions-builtin/sd-webui-controlnet / main
18:59:37-436006 DEBUG    Submodule: extensions-builtin/stable-diffusion-webui-images-browser / main
18:59:37-498507 DEBUG    Submodule: extensions-builtin/stable-diffusion-webui-rembg / master
18:59:37-559399 DEBUG    Submodule: modules/lora / main
18:59:37-639627 DEBUG    Submodule: wiki / master
18:59:37-795885 DEBUG    Installed packages: 219
18:59:37-795885 DEBUG    Extensions all: ['clip-interrogator-ext', 'Lora', 'sd-extension-chainner', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg']
18:59:37-795885 DEBUG    Running extension installer: F:\Stable Diffusion\SD.Next\extensions-builtin\clip-interrogator-ext\install.py
18:59:42-628453 DEBUG    Running extension installer: F:\Stable Diffusion\SD.Next\extensions-builtin\sd-extension-system-info\install.py
18:59:43-009522 DEBUG    Running extension installer: F:\Stable Diffusion\SD.Next\extensions-builtin\sd-webui-agent-scheduler\install.py
18:59:43-358157 DEBUG    Running extension installer: F:\Stable Diffusion\SD.Next\extensions-builtin\sd-webui-controlnet\install.py
18:59:43-719707 DEBUG    Running extension installer: F:\Stable Diffusion\SD.Next\extensions-builtin\stable-diffusion-webui-images-browser\install.py
18:59:44-088917 DEBUG    Running extension installer: F:\Stable Diffusion\SD.Next\extensions-builtin\stable-diffusion-webui-rembg\install.py
18:59:44-447322 INFO     Extensions enabled: ['clip-interrogator-ext', 'Lora', 'sd-extension-chainner', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg']
18:59:44-449502 INFO     Verifying requirements
18:59:44-459500 DEBUG    Setup complete without errors: 1698011984
18:59:44-460500 INFO     Running in safe mode without user extensions
18:59:44-465880 INFO     Extension preload: {'extensions-builtin': 0.0}
18:59:44-466881 DEBUG    Starting module: <module 'webui' from 'F:\\Stable Diffusion\\SD.Next\\webui.py'>
18:59:44-468880 INFO     Command line args: ['--autolaunch', '--safe', '--debug'] autolaunch=True debug=True safe=True
18:59:49-254429 DEBUG    Loaded packages: torch=2.1.0+cu121 diffusers=0.21.4 gradio=3.43.2
18:59:49-629100 DEBUG    Reading: config.json len=29
18:59:49-631100 DEBUG    Unknown settings: ['multiple_tqdm']
18:59:49-633103 INFO     Engine: backend=Backend.ORIGINAL compute=cuda mode=no_grad device=cuda cross-optimization="Scaled-Dot-Product"
18:59:49-682733 INFO     Device: device=NVIDIA GeForce RTX 4080 n=1 arch=sm_90 cap=(8, 9) cuda=12.1 cudnn=8801 driver=545.84
18:59:50-312516 DEBUG    Entering start sequence
18:59:50-315516 DEBUG    Initializing
18:59:50-318516 INFO     Available VAEs: models\VAE items=1
18:59:50-320516 INFO     Safe mode disabling extensions: ['sd-webui-controlnet', 'multidiffusion-upscaler-for-automatic1111', 'a1111-sd-webui-lycoris', 'sd-webui-agent-scheduler', 'clip-interrogator-ext', 'stable-diffusion-webui-rembg', 'sd-extension-chainner', 'stable-diffusion-webui-images-browser']
18:59:50-323516 DEBUG    Reading: cache.json len=2
18:59:50-325516 DEBUG    Reading: metadata.json len=35
18:59:50-327516 INFO     Available models: models\Stable-diffusion items=13 time=0.01s
18:59:50-478875 DEBUG    Loading extensions
18:59:51-466634 INFO     Extensions time: 0.99s { Lora=0.95s }
18:59:51-533633 DEBUG    Reading: html/upscalers.json len=4
18:59:51-537633 DEBUG    Loaded upscalers: total=32 downloaded=8 user=6 ['None', 'Lanczos', 'Nearest', 'ESRGAN', 'LDSR', 'RealESRGAN', 'SCUNet', 'SD', 'SwinIR']
18:59:51-545633 DEBUG    Loaded styles: folder=models\styles items=289
18:59:51-549632 DEBUG    Creating UI
18:59:51-839649 INFO     Loading UI theme: name=black-teal style=Auto base=style.css
18:59:51-874648 DEBUG    Extra networks: page='model' items=13 subdirs=5 tab=txt2img dirs=['models\\Stable-diffusion', 'models\\Diffusers', 'F:\\Stable Diffusion\\SD.Next\\models\\Stable-diffusion'] time=0.02s
18:59:51-891649 DEBUG    Extra networks: page='style' items=289 subdirs=2 tab=txt2img dirs=['models\\styles', 'html'] time=0.0s
18:59:51-895650 DEBUG    Extra networks: page='embedding' items=9 subdirs=1 tab=txt2img dirs=['models\\embeddings'] time=0.01s
18:59:51-898648 DEBUG    Extra networks: page='hypernetwork' items=0 subdirs=0 tab=txt2img dirs=['models\\hypernetworks'] time=0.0s
18:59:51-901648 DEBUG    Extra networks: page='vae' items=1 subdirs=1 tab=txt2img dirs=['models\\VAE'] time=0.0s
18:59:51-905647 DEBUG    FS walk error: [WinError 3] The system cannot find the specified path: 'F:\\Stable Diffusion\\SD.Next\\models\\LyCORIS' F:\Stable Diffusion\SD.Next\models\LyCORIS
18:59:51-908648 DEBUG    Extra networks: page='lora' items=22 subdirs=6 tab=txt2img dirs=['models\\Lora', 'models\\LyCORIS'] time=0.02s
18:59:52-059657 DEBUG    Reading: ui-config.json len=0
18:59:52-118174 DEBUG    Themes: builtin=6 default=5 external=55
18:59:52-155174 DEBUG    Reading: F:\Stable Diffusion\SD.Next\html\extensions.json len=310
18:59:52-753712 DEBUG    Extension list: processed=294 installed=8 enabled=2 disabled=6 visible=294 hidden=0
18:59:52-915924 INFO     Local URL: http://127.0.0.1:7860/
18:59:52-917926 DEBUG    Gradio registered functions: 870
18:59:52-918925 INFO     Initializing middleware
18:59:52-922925 DEBUG    Creating API
18:59:53-019910 DEBUG    Scripts setup: []
18:59:53-020910 DEBUG    Model metadata: metadata.json no changes
18:59:53-022911 INFO     Select: model="realistic\Realistic_Vision_v5.1_NoVAE [99a75a901f]"
18:59:53-025909 DEBUG    Load model weights: existing=False target=F:\Stable Diffusion\SD.Next\models\Stable-diffusion\realistic\Realistic_Vision_v5.1_NoVAE.safetensors info=None
Loading weights: F:\Stable Diffusion\SD.Next\models\Stable-diffusion\realistic\Realistic_Vision_v5.1_NoVAE.safetensors ---------------------------------------- 0.0/2.1 GB -:--:--
18:59:53-055910 DEBUG    Load model: name=F:\Stable Diffusion\SD.Next\models\Stable-diffusion\realistic\Realistic_Vision_v5.1_NoVAE.safetensors dict=True
18:59:53-084910 DEBUG    Desired Torch parameters: dtype=FP16 no-half=False no-half-vae=False upscast=False
18:59:53-086910 INFO     Setting Torch parameters: device=cuda dtype=torch.float16 vae=torch.float16 unet=torch.float16 context=no_grad fp16=True bf16=False
18:59:53-089911 DEBUG    Model dict loaded: {'ram': {'used': 0.97, 'total': 63.89}, 'gpu': {'used': 1.32, 'total': 15.99}, 'retries': 0, 'oom': 0}
18:59:53-101910 DEBUG    Model config loaded: {'ram': {'used': 0.97, 'total': 63.89}, 'gpu': {'used': 1.32, 'total': 15.99}, 'retries': 0, 'oom': 0}
18:59:53-714425 INFO     LDM: LatentDiffusion: Running in eps-prediction mode
18:59:53-716426 INFO     LDM: DiffusionWrapper has 859.52 M params.
18:59:53-717425 DEBUG    Model created from config: F:\Stable Diffusion\SD.Next\configs\v1-inference.yaml
18:59:53-719426 INFO     Autodetect: model="Stable Diffusion" class=StableDiffusionPipeline file="F:\Stable Diffusion\SD.Next\models\Stable-diffusion\realistic\Realistic_Vision_v5.1_NoVAE.safetensors" size=2034MB
18:59:53-721426 DEBUG    Model weights loading: {'ram': {'used': 1.94, 'total': 63.89}, 'gpu': {'used': 1.32, 'total': 15.99}, 'retries': 0, 'oom': 0}
Loading weights: models\VAE\vae-ft-mse-840000-ema-pruned.safetensors ---------------------------------------- 0.0/334.6 MB -:--:--
18:59:54-655438 DEBUG    Model weights loaded: {'ram': {'used': 7.24, 'total': 63.89}, 'gpu': {'used': 1.32, 'total': 15.99}, 'retries': 0, 'oom': 0}
18:59:54-936438 DEBUG    Model weights moved: {'ram': {'used': 7.24, 'total': 63.89}, 'gpu': {'used': 3.35, 'total': 15.99}, 'retries': 0, 'oom': 0}
18:59:54-944437 INFO     Cross-attention: optimization=Scaled-Dot-Product options=[]
18:59:55-132437 INFO     Loaded embeddings: loaded=9 skipped=0 time=0.18s
18:59:55-137438 INFO     Model loaded in 2.11s { create=0.62s apply=0.57s vae=0.36s move=0.28s embeddings=0.19s }
18:59:55-335438 DEBUG    gc: collected=514 device=cuda {'ram': {'used': 7.32, 'total': 63.89}, 'gpu': {'used': 3.35, 'total': 15.99}, 'retries': 0, 'oom': 0}
18:59:55-340437 INFO     Model load finished: {'ram': {'used': 7.32, 'total': 63.89}, 'gpu': {'used': 3.35, 'total': 15.99}, 'retries': 0, 'oom': 0} cached=0
18:59:55-396385 DEBUG    Saving: config.json len=1201
18:59:55-398382 DEBUG    Unused settings: ['multiple_tqdm']
18:59:55-400380 INFO     Startup time: 10.91s { torch=4.30s gradio=0.44s libraries=1.06s extensions=0.99s face-restore=0.15s upscalers=0.07s ui-extra-networks=0.36s ui-txt2img=0.07s ui-settings=0.10s ui-extensions=0.62s launch=0.12s api=0.07s checkpoint=2.38s }
18:59:55-405380 INFO     Launching browser
18:59:56-194242 INFO     MOTD: N/A
18:59:56-861391 DEBUG    Themes: builtin=6 default=5 external=55
18:59:56-942392 INFO     Browser session: client=127.0.0.1 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36
18:59:59-546334 DEBUG    Server alive=True jobs=1 requests=12 uptime=10s memory used=5.34 total=63.89 idle
19:00:05-643956 DEBUG    txt2img: id_task=task(bm2nv4bg0ck8e6g)|prompt=|negative_prompt=|prompt_styles=[]|steps=20|sampler_index=None|latent_index=None|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
19:00:05-655957 INFO     Prompts-from-file: lines=3 jobs=3
19:00:05-858956 DEBUG    Sampler: sampler=Euler a config={'scheduler': 'default', 'brownian_noise': False}
 50%|████████████████████                    | 10/20 [00:01<00:01,  9.68it/s]
19:00:07-647965 DEBUG    Loaded VAE decode approximate: model="models\VAE-approx\model.pt"
100%|████████████████████████████████████████| 20/20 [00:02<00:00,  9.44it/s]
19:00:08-776906 INFO     Processed: images=1 time=3.12s its=6.41 memory={'ram': {'used': 3.56, 'total': 63.89}, 'gpu': {'used': 3.39, 'total': 15.99}, 'retries': 0, 'oom': 0}
19:00:08-805905 DEBUG    Sampler: sampler=Euler a config={'scheduler': 'default', 'brownian_noise': False}
100%|████████████████████████████████████████| 20/20 [00:00<00:00, 22.73it/s]
19:00:09-839909 INFO     Processed: images=1 time=1.06s its=18.90 memory={'ram': {'used': 2.15, 'total': 63.89}, 'gpu': {'used': 4.52, 'total': 15.99}, 'retries': 0, 'oom': 0}
19:00:09-873910 DEBUG    Sampler: sampler=Euler a config={'scheduler': 'default', 'brownian_noise': False}
100%|████████████████████████████████████████| 20/20 [00:00<00:00, 22.70it/s]
19:00:10-820049 INFO     Processed: images=1 time=0.97s its=20.53 memory={'ram': {'used': 2.15, 'total': 63.89}, 'gpu': {'used': 4.52, 'total': 15.99}, 'retries': 0, 'oom': 0}
19:00:10-885049 DEBUG    Saving temp: image="C:\Users\SDT-PC~1\AppData\Local\Temp\gradio\tmpfla79szs.png"
19:00:10-962049 DEBUG    Saving temp: image="C:\Users\SDT-PC~1\AppData\Local\Temp\gradio\tmpz9wa_742.png"
19:00:11-058049 DEBUG    Saving temp: image="C:\Users\SDT-PC~1\AppData\Local\Temp\gradio\tmpeb43dhzn.png"
19:00:14-978885 DEBUG    Saving: image="outputs\save\00000-Realistic_Vision_v5.1_NoVAE-red car.webp" type=WEBP size=512x512
19:00:17-410168 DEBUG    Saving: image="outputs\save\00001-Realistic_Vision_v5.1_NoVAE-green bicycle.webp" type=WEBP size=512x512
19:00:25-374363 DEBUG    Saving: image="outputs\save\00002-Realistic_Vision_v5.1_NoVAE-blue airplane.webp" type=WEBP size=512x512
19:02:00-332855 DEBUG    Server alive=True jobs=1 requests=26 uptime=131s memory used=2.14 total=63.89 job="1 out of 3" 0/3
vladmandic commented 11 months ago

ok, second attempt of a fix is posted.

djasil commented 11 months ago

The fix works when using multiple lines only, but batches are still broken.

Example: generating images 1-2-3-4 from 2 lines with batch count 2, they are saved with metadata 2-4-4-4 and filenames 2-3-1-1.
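For reference, the expected mapping from a flat saved-image index to its prompt line and batch iteration is plain modular arithmetic (a hypothetical sketch of the invariant, not the script's actual internals):

```python
def image_metadata_index(image_index, batch_count):
    """Map a flat image index to (prompt_line, batch_iteration).

    Each prompt line is processed batch_count times in sequence, so
    images 0..batch_count-1 belong to line 0, the next batch_count
    images to line 1, and so on. The saved metadata and filename for
    image i should both be derived from these indices.
    """
    return image_index // batch_count, image_index % batch_count


# With 2 lines and batch count 2, images 0-3 should map to lines 0,0,1,1
assert [image_metadata_index(i, 2)[0] for i in range(4)] == [0, 0, 1, 1]
```

The reported 2-4-4-4 metadata and 2-3-1-1 filenames show the script violating this invariant in two different places.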

vladmandic commented 11 months ago

batch processing for prompts-from-file is done within the script and does not follow the general processing workflow. i can reopen the issue for the batch workflow and for when save is not done automatically - but changing that would mean rewriting the script, which i really don't want to do right now.

prs are welcome.
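To illustrate the structural point above (a hypothetical sketch, not the real SD.Next code): when a script loops over prompt lines internally and merges the per-line results itself, per-image metadata survives only if the script carries each run's infotexts over in lockstep with the images; otherwise the outer save path falls back to stale job-level state.

```python
from dataclasses import dataclass, field


@dataclass
class Processed:
    """Minimal stand-in for a processing result: images plus a
    parallel list of per-image infotext strings."""
    images: list = field(default_factory=list)
    infotexts: list = field(default_factory=list)


def run_script(lines, process_line):
    """Sketch of a script-internal batch loop that keeps metadata
    correct: results from each per-line run are concatenated so that
    images[i] and infotexts[i] always describe the same image."""
    combined = Processed()
    for line in lines:
        result = process_line(line)
        combined.images.extend(result.images)
        combined.infotexts.extend(result.infotexts)  # stay 1:1 with images
    return combined
```

A PR along these lines would mainly need to ensure the save path indexes into the combined per-image lists rather than the last run's parameters.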