vladmandic / automatic

SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
https://github.com/vladmandic/automatic
GNU Affero General Public License v3.0

[Issue]: Generation speed drops when using loras #2602

Closed · devils-shadow closed this issue 7 months ago

devils-shadow commented 7 months ago

Issue Description

When LoRAs are used in a prompt, generation speed drops roughly in proportion to the number of LoRAs loaded. The behavior reproduces consistently on both the diffusers and original backends and is sampler-agnostic.

Example: SD.Next with diffusers, dreamshaper 1.5, all default settings, default extensions, default generation parameters, 50 steps: 19 it/s. Loading one LoRA into the prompt drops generation speed to about 11 it/s, and the drop continues as more LoRAs are added.

https://github.com/vladmandic/automatic/assets/17099756/b4dfa5d7-1ce6-4a25-b154-65242ad6cf14
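For reference, a minimal diffusers-only sketch (outside SD.Next) that times the same comparison. The checkpoint and LoRA paths below are placeholders for whatever files are available locally, and the absolute it/s will not match the UI (the elapsed time here also includes text encoding and VAE decode), but the relative slowdown from an unfused LoRA should show up the same way:

```python
# Hypothetical reproduction sketch: compare it/s without a LoRA, with a LoRA
# loaded (unfused), and after fuse_lora() bakes the weights into the UNet.
import time
import torch
from diffusers import StableDiffusionPipeline

MODEL = "models/Stable-diffusion/SD1.5/dreamshaper_8.safetensors"  # placeholder path
LORA_DIR = "models/Lora"                                           # placeholder path
LORA_FILE = "add_detail.safetensors"                               # placeholder file
STEPS = 20

pipe = StableDiffusionPipeline.from_single_file(MODEL, torch_dtype=torch.float16).to("cuda")
pipe.set_progress_bar_config(disable=True)

def bench(label):
    # warm-up run so one-time setup does not skew the timing
    pipe("girl in field", num_inference_steps=STEPS)
    t0 = time.perf_counter()
    pipe("girl in field", num_inference_steps=STEPS)
    dt = time.perf_counter() - t0
    print(f"{label}: {STEPS / dt:.1f} it/s")

bench("baseline (no lora)")

pipe.load_lora_weights(LORA_DIR, weight_name=LORA_FILE)
bench("lora loaded (unfused)")

pipe.fuse_lora()  # merge the LoRA into the UNet weights
bench("lora fused")
```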

Version Platform Description

SDNext Version: app=sd.next updated=2023-12-04 hash=93f35ccf · Python 3.10.11 · Windows 11 23H2 (OS build 22631.2792) · Nvidia 4070, driver version 546.29

Relevant log output

2023-12-07 02:23:59,028 | sd | INFO | launch | Starting SD.Next
2023-12-07 02:23:59,032 | sd | INFO | installer | Logger: file="E:\automatic\sdnext.log" level=DEBUG size=65 mode=create
2023-12-07 02:23:59,033 | sd | INFO | installer | Python 3.10.11 on Windows
2023-12-07 02:23:59,164 | sd | INFO | installer | Version: app=sd.next updated=2023-12-04 hash=93f35ccf url=https://github.com/vladmandic/automatic/tree/master
2023-12-07 02:23:59,518 | sd | INFO | launch | Platform: arch=AMD64 cpu=AMD64 Family 25 Model 33 Stepping 2, AuthenticAMD system=Windows release=Windows-10-10.0.22631-SP0 python=3.10.11
2023-12-07 02:23:59,519 | sd | DEBUG | installer | Setting environment tuning
2023-12-07 02:23:59,521 | sd | DEBUG | installer | Cache folder: C:\Users\devil\.cache\huggingface\hub
2023-12-07 02:23:59,522 | sd | DEBUG | installer | Torch overrides: cuda=False rocm=False ipex=False diml=False openvino=False
2023-12-07 02:23:59,523 | sd | DEBUG | installer | Torch allowed: cuda=True rocm=True ipex=True diml=True openvino=True
2023-12-07 02:23:59,525 | sd | INFO | installer | nVidia CUDA toolkit detected: nvidia-smi present
2023-12-07 02:23:59,605 | sd | DEBUG | installer | Repository update time: Mon Dec  4 20:31:52 2023
2023-12-07 02:23:59,605 | sd | INFO | launch | Startup: standard
2023-12-07 02:23:59,606 | sd | INFO | installer | Verifying requirements
2023-12-07 02:23:59,614 | sd | INFO | installer | Verifying packages
2023-12-07 02:23:59,615 | sd | INFO | installer | Verifying submodules
2023-12-07 02:24:01,580 | sd | DEBUG | installer | Submodule: extensions-builtin/sd-extension-chainner / main
2023-12-07 02:24:01,645 | sd | DEBUG | installer | Submodule: extensions-builtin/sd-extension-system-info / main
2023-12-07 02:24:01,709 | sd | DEBUG | installer | Submodule: extensions-builtin/sd-webui-agent-scheduler / main
2023-12-07 02:24:01,774 | sd | DEBUG | installer | Submodule: extensions-builtin/sd-webui-controlnet / main
2023-12-07 02:24:01,847 | sd | DEBUG | installer | Submodule: extensions-builtin/stable-diffusion-webui-images-browser / main
2023-12-07 02:24:01,913 | sd | DEBUG | installer | Submodule: extensions-builtin/stable-diffusion-webui-rembg / master
2023-12-07 02:24:01,979 | sd | DEBUG | installer | Submodule: modules/k-diffusion / master
2023-12-07 02:24:02,046 | sd | DEBUG | installer | Submodule: modules/lora / main
2023-12-07 02:24:02,115 | sd | DEBUG | installer | Submodule: wiki / master
2023-12-07 02:24:02,156 | sd | DEBUG | paths | Register paths
2023-12-07 02:24:02,248 | sd | DEBUG | installer | Installed packages: 225
2023-12-07 02:24:02,249 | sd | DEBUG | installer | Extensions all: ['clip-interrogator-ext', 'Lora', 'sd-extension-chainner', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg']
2023-12-07 02:24:02,251 | sd | DEBUG | installer | Running extension installer: E:\automatic\extensions-builtin\clip-interrogator-ext\install.py
2023-12-07 02:24:07,910 | sd | DEBUG | installer | Running extension installer: E:\automatic\extensions-builtin\sd-extension-system-info\install.py
2023-12-07 02:24:08,224 | sd | DEBUG | installer | Running extension installer: E:\automatic\extensions-builtin\sd-webui-agent-scheduler\install.py
2023-12-07 02:24:08,551 | sd | DEBUG | installer | Running extension installer: E:\automatic\extensions-builtin\stable-diffusion-webui-images-browser\install.py
2023-12-07 02:24:08,868 | sd | DEBUG | installer | Running extension installer: E:\automatic\extensions-builtin\stable-diffusion-webui-rembg\install.py
2023-12-07 02:24:09,194 | sd | DEBUG | installer | Extensions all: ['a1111-sd-webui-tagcomplete', 'adetailer', 'stable-diffusion-webui-wildcards', 'ultimate-upscale-for-automatic1111']
2023-12-07 02:24:09,283 | sd | DEBUG | installer | Running extension installer: E:\automatic\extensions\adetailer\install.py
2023-12-07 02:24:09,818 | sd | INFO | installer | Extensions enabled: ['clip-interrogator-ext', 'Lora', 'sd-extension-chainner', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg', 'a1111-sd-webui-tagcomplete', 'adetailer', 'stable-diffusion-webui-wildcards', 'ultimate-upscale-for-automatic1111']
2023-12-07 02:24:09,820 | sd | INFO | installer | Verifying requirements
2023-12-07 02:24:09,823 | sd | DEBUG | launch | Setup complete without errors: 1701908650
2023-12-07 02:24:09,830 | sd | INFO | installer | Extension preload: {'extensions-builtin': 0.0, 'extensions': 0.0}
2023-12-07 02:24:09,831 | sd | DEBUG | launch | Starting module: <module 'webui' from 'E:\\automatic\\webui.py'>
2023-12-07 02:24:09,832 | sd | INFO | launch | Command line args: ['--debug', '--share', '--insecure'] share=True insecure=True debug=True
2023-12-07 02:24:14,499 | sd | INFO | loader | Load packages: torch=2.1.1+cu121 diffusers=0.24.0 gradio=3.43.2
2023-12-07 02:24:15,042 | sd | DEBUG | shared | Read: file="config.json" json=44 bytes=1810
2023-12-07 02:24:15,045 | sd | DEBUG | shared | Unknown settings: ['multiple_tqdm', 'animatediff_model_path', 'animatediff_s3_host', 'animatediff_s3_port', 'animatediff_s3_access_key', 'animatediff_s3_secret_key', 'animatediff_s3_storge_bucket']
2023-12-07 02:24:15,046 | sd | INFO | shared | Engine: backend=Backend.DIFFUSERS compute=cuda mode=no_grad device=cuda cross-optimization="Scaled-Dot-Product"
2023-12-07 02:24:15,087 | sd | INFO | shared | Device: device=NVIDIA GeForce RTX 4070 n=1 arch=sm_90 cap=(8, 9) cuda=12.1 cudnn=8801 driver=546.29
2023-12-07 02:24:20,737 | sd | DEBUG | webui | Entering start sequence
2023-12-07 02:24:20,740 | sd | DEBUG | webui | Initializing
2023-12-07 02:24:20,742 | sd | INFO | sd_vae | Available VAEs: path="models\VAE" items=6
2023-12-07 02:24:20,744 | sd | INFO | shared | Disabling uncompatible extensions: backend=Backend.DIFFUSERS ['a1111-sd-webui-lycoris', 'sd-webui-animatediff']
2023-12-07 02:24:20,746 | sd | DEBUG | modelloader | Scanning diffusers cache: models\Diffusers models\Diffusers items=1 time=0.00
2023-12-07 02:24:20,751 | sd | DEBUG | shared | Read: file="cache.json" json=2 bytes=7741
2023-12-07 02:24:20,758 | sd | DEBUG | shared | Read: file="metadata.json" json=110 bytes=144719
2023-12-07 02:24:20,761 | sd | INFO | sd_models | Available models: path="models\Stable-diffusion" items=20 time=0.02
2023-12-07 02:24:21,311 | sd | DEBUG | webui | Load extensions
2023-12-07 02:24:22,478 | sd | INFO | script_loading | Extension: script='extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py' Using sqlite file: extensions-builtin\sd-webui-agent-scheduler\task_scheduler.sqlite3
2023-12-07 02:24:23,784 | sd | INFO | script_loading | Extension: script='extensions\a1111-sd-webui-tagcomplete\scripts\tag_autocomplete_helper.py' Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
2023-12-07 02:24:25,373 | sd | INFO | script_loading | Extension: script='extensions\adetailer\scripts\!adetailer.py' [-] ADetailer initialized. version: 23.11.1, num models: 9
2023-12-07 02:24:25,383 | sd | INFO | webui | Extensions time: 4.07 { clip-interrogator-ext=0.73 sd-extension-chainner=0.05 sd-webui-agent-scheduler=0.32 stable-diffusion-webui-images-browser=0.12 stable-diffusion-webui-rembg=1.15 adetailer=1.59 }
2023-12-07 02:24:25,418 | sd | DEBUG | shared | Read: file="html/upscalers.json" json=4 bytes=2672
2023-12-07 02:24:25,424 | sd | DEBUG | shared | Read: file="extensions-builtin\sd-extension-chainner\models.json" json=24 bytes=2719
2023-12-07 02:24:25,426 | sd | DEBUG | chainner_model | chaiNNer models: path="models\chaiNNer" defined=24 discovered=0 downloaded=5
2023-12-07 02:24:25,430 | sd | DEBUG | modelloader | Load upscalers: total=52 downloaded=15 user=0 time=0.04 ['None', 'Lanczos', 'Nearest', 'ChaiNNer', 'ESRGAN', 'LDSR', 'RealESRGAN', 'SCUNet', 'SD', 'SwinIR']
2023-12-07 02:24:25,837 | sd | DEBUG | styles | Load styles: folder="models\styles" items=289 time=0.40
2023-12-07 02:24:25,841 | sd | DEBUG | webui | Creating UI
2023-12-07 02:24:26,057 | sd | INFO | theme | Load UI theme: name="black-teal" style=Auto base=sdnext.css
2023-12-07 02:24:26,105 | sd | DEBUG | shared | Read: file="html\reference.json" json=18 bytes=11012
2023-12-07 02:24:26,194 | sd | DEBUG | ui_extra_networks | Extra networks: page='model' items=38 subfolders=5 tab=txt2img folders=['models\\Stable-diffusion', 'models\\Diffusers', 'models\\Reference', 'E:\\automatic\\models\\Stable-diffusion'] list=0.04 desc=0.02 info=0.03 workers=2
2023-12-07 02:24:26,210 | sd | DEBUG | ui_extra_networks | Extra networks: page='style' items=289 subfolders=2 tab=txt2img folders=['models\\styles', 'html'] list=0.02 desc=0.00 info=0.00 workers=2
2023-12-07 02:24:26,213 | sd | DEBUG | ui_extra_networks | Extra networks: page='embedding' items=8 subfolders=1 tab=txt2img folders=['models\\embeddings'] list=0.02 desc=0.00 info=0.03 workers=2
2023-12-07 02:24:26,215 | sd | DEBUG | ui_extra_networks | Extra networks: page='hypernetwork' items=0 subfolders=1 tab=txt2img folders=['models\\hypernetworks'] list=0.00 desc=0.00 info=0.00 workers=2
2023-12-07 02:24:26,218 | sd | DEBUG | ui_extra_networks | Extra networks: page='vae' items=6 subfolders=1 tab=txt2img folders=['models\\VAE'] list=0.02 desc=0.00 info=0.02 workers=2
2023-12-07 02:24:26,222 | sd | DEBUG | ui_extra_networks | Extra networks: page='lora' items=32 subfolders=1 tab=txt2img folders=['models\\Lora', 'models\\LyCORIS'] list=0.03 desc=0.01 info=0.03 workers=2
2023-12-07 02:24:26,391 | sd | DEBUG | shared | Read: file="ui-config.json" json=0 bytes=2
2023-12-07 02:24:26,609 | sd | DEBUG | theme | Themes: builtin=6 default=5 external=55
2023-12-07 02:24:27,365 | sd | DEBUG | script_callbacks | Script: 0.67 ui_tabs E:\automatic\extensions-builtin\stable-diffusion-webui-images-browser\scripts\image_browser.py
2023-12-07 02:24:27,374 | sd | DEBUG | shared | Read: file="E:\automatic\html\extensions.json" json=328 bytes=191889
2023-12-07 02:24:28,401 | sd | DEBUG | ui_extensions | Extension list: processed=312 installed=13 enabled=11 disabled=2 visible=312 hidden=0
2023-12-07 02:24:30,984 | sd | INFO | webui | Local URL: http://127.0.0.1:7860/
2023-12-07 02:24:30,985 | sd | INFO | webui | Share URL: https://468a752225538c2ede.gradio.live
2023-12-07 02:24:30,986 | sd | DEBUG | webui | Gradio functions: registered=1751
2023-12-07 02:24:30,987 | sd | INFO | middleware | Initializing middleware
2023-12-07 02:24:30,991 | sd | DEBUG | webui | Creating API
2023-12-07 02:24:31,118 | sd | INFO | task_runner | [AgentScheduler] Task queue is empty
2023-12-07 02:24:31,119 | sd | INFO | api | [AgentScheduler] Registering APIs
2023-12-07 02:24:31,217 | sd | DEBUG | webui | Scripts setup: ['ADetailer:0.021']
2023-12-07 02:24:31,220 | sd | DEBUG | sd_models | Model metadata: file="metadata.json" no changes
2023-12-07 02:24:31,221 | sd | DEBUG | webui | Model auto load disabled
2023-12-07 02:24:31,222 | sd | DEBUG | shared | Save: file="config.json" json=44 bytes=1752
2023-12-07 02:24:31,223 | sd | DEBUG | shared | Unused settings: ['multiple_tqdm', 'animatediff_model_path', 'animatediff_s3_host', 'animatediff_s3_port', 'animatediff_s3_access_key', 'animatediff_s3_secret_key', 'animatediff_s3_storge_bucket']
2023-12-07 02:24:31,224 | sd | INFO | webui | Startup time: 21.39 { torch=3.27 gradio=1.32 diffusers=0.07 libraries=6.24 extensions=4.07 face-restore=0.55 extra-networks=0.41 ui-extra-networks=0.38 ui-img2img=0.06 ui-settings=0.29 ui-extensions=1.75 ui-defaults=0.06 launch=2.51 api=0.07 app-started=0.16 }
2023-12-07 02:24:52,127 | sd | INFO | api | MOTD: N/A
2023-12-07 02:25:01,234 | sd | DEBUG | theme | Themes: builtin=6 default=5 external=55
2023-12-07 02:25:02,169 | sd | INFO | api | Browser session: user=None client=172.31.16.117 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36 Edg/119.0.0.0
2023-12-07 02:25:15,189 | sd | INFO | sd_models | Select: model="SD1.5\dreamshaper_8 [879db523c3]"
2023-12-07 02:25:15,190 | sd | DEBUG | sd_models | Load model weights: existing=False target=E:\automatic\models\Stable-diffusion\SD1.5\dreamshaper_8.safetensors info=None
2023-12-07 02:25:15,244 | sd | DEBUG | devices | Desired Torch parameters: dtype=FP16 no-half=False no-half-vae=False upscast=False
2023-12-07 02:25:15,245 | sd | INFO | devices | Setting Torch parameters: device=cuda dtype=torch.float16 vae=torch.float16 unet=torch.float16 context=inference_mode fp16=True bf16=False
2023-12-07 02:25:15,247 | sd | INFO | sd_models | Autodetect: model="Stable Diffusion" class=StableDiffusionPipeline file="E:\automatic\models\Stable-diffusion\SD1.5\dreamshaper_8.safetensors" size=2034MB
2023-12-07 02:25:16,863 | sd | DEBUG | sd_models | Setting model: pipeline=StableDiffusionPipeline config={'low_cpu_mem_usage': True, 'torch_dtype': torch.float16, 'load_connected_pipeline': True, 'variant': 'fp16', 'extract_ema': True, 'force_zeros_for_empty_prompt ': True, 'requires_aesthetics_score': False, 'use_safetensors': True}
2023-12-07 02:25:16,865 | sd | DEBUG | sd_models | Setting model: enable VAE slicing
2023-12-07 02:25:16,867 | sd | DEBUG | sd_models | Setting model: enable VAE tiling
2023-12-07 02:25:16,871 | sd | DEBUG | sd_models | Setting model VAE: name=None upcast=True
2023-12-07 02:25:17,666 | sd | INFO | textual_inversion | Load embeddings: loaded=7 skipped=1 time=0.34
2023-12-07 02:25:17,933 | sd | DEBUG | devices | gc: collected=1600 device=cuda {'ram': {'used': 5.41, 'total': 31.92}, 'gpu': {'used': 3.27, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 02:25:17,940 | sd | INFO | sd_models | Load model: time=2.47 { load=2.47 } native=512 {'ram': {'used': 5.41, 'total': 31.92}, 'gpu': {'used': 3.27, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 02:25:35,312 | sd | DEBUG | ui_models | CivitAI search metadata: Embedding
2023-12-07 02:25:35,378 | sd | DEBUG | ui_extra_networks | Extra networks: page='embedding' items=7 subfolders=1 tab=txt2img folders=['models\\embeddings'] list=0.02 desc=0.01 info=0.04 workers=2
2023-12-07 02:25:35,380 | sd | DEBUG | ui_extra_networks | Refreshing Extra networks: page='Embedding' items=7 tab=txt2img
2023-12-07 02:25:45,623 | sd | DEBUG | txt2img | txt2img: id_task=task(m3gnq1a3ucq3rkb)|prompt=girl in field, |negative_prompt=|prompt_styles=[]|steps=20|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 02:25:45,626 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 02:25:46,053 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 20, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 02:25:49,081 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.78
2023-12-07 02:25:49,100 | sd | DEBUG | images | Saving: image="outputs\text\00223-dreamshaper_8-girl in field.jpg" type=JPEG size=512x512
2023-12-07 02:25:49,105 | sd | INFO | processing | Processed: images=1 time=3.47 its=5.76 memory={'ram': {'used': 2.51, 'total': 31.92}, 'gpu': {'used': 3.43, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 02:25:52,717 | sd | DEBUG | txt2img | txt2img: id_task=task(ngnro93j0fesbmf)|prompt=girl in field, |negative_prompt=|prompt_styles=[]|steps=20|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 02:25:52,720 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 02:25:52,786 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 20, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 02:25:53,877 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.015
2023-12-07 02:25:53,949 | sd | DEBUG | images | Saving: image="outputs\text\00224-dreamshaper_8-girl in field.jpg" type=JPEG size=512x512
2023-12-07 02:25:53,953 | sd | INFO | processing | Processed: images=1 time=1.23 its=16.27 memory={'ram': {'used': 2.51, 'total': 31.92}, 'gpu': {'used': 4.13, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 02:26:00,273 | sd | DEBUG | launch | Server: alive=True jobs=0 requests=174 uptime=105 memory=1.9/31.92 backend=Backend.DIFFUSERS state=idle
2023-12-07 02:26:05,228 | sd | DEBUG | txt2img | txt2img: id_task=task(g17mzp7qes433v0)|prompt=girl in field, <lora:add_detail:1>, |negative_prompt=|prompt_styles=[]|steps=20|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 02:26:05,231 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 02:26:05,394 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail'] patch=0.00 load=0.16
2023-12-07 02:26:05,539 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 20, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 02:26:07,345 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.012
2023-12-07 02:26:07,421 | sd | DEBUG | images | Saving: image="outputs\text\00225-dreamshaper_8-girl in field lora add detail 1.jpg" type=JPEG size=512x512
2023-12-07 02:26:07,426 | sd | INFO | processing | Processed: images=1 time=2.19 its=9.13 memory={'ram': {'used': 1.92, 'total': 31.92}, 'gpu': {'used': 4.05, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 02:28:00,328 | sd | DEBUG | launch | Server: alive=True jobs=0 requests=193 uptime=225 memory=1.91/31.92 backend=Backend.DIFFUSERS state=idle
2023-12-07 02:28:44,744 | sd | INFO | api | MOTD: N/A
2023-12-07 02:28:51,349 | sd | DEBUG | theme | Themes: builtin=6 default=5 external=55
2023-12-07 02:28:51,914 | sd | INFO | api | Browser session: user=None client=172.31.16.117 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36 Edg/119.0.0.0
2023-12-07 02:30:00,391 | sd | DEBUG | launch | Server: alive=True jobs=0 requests=239 uptime=345 memory=1.92/31.92 backend=Backend.DIFFUSERS state=idle
2023-12-07 02:32:00,459 | sd | DEBUG | launch | Server: alive=True jobs=0 requests=241 uptime=465 memory=1.92/31.92 backend=Backend.DIFFUSERS state=idle
2023-12-07 02:33:59,515 | sd | DEBUG | launch | Server: alive=True jobs=0 requests=241 uptime=585 memory=1.92/31.92 backend=Backend.DIFFUSERS state=idle
2023-12-07 02:35:59,572 | sd | DEBUG | launch | Server: alive=True jobs=0 requests=241 uptime=705 memory=1.92/31.92 backend=Backend.DIFFUSERS state=idle
2023-12-07 02:37:59,633 | sd | DEBUG | launch | Server: alive=True jobs=0 requests=241 uptime=825 memory=1.92/31.92 backend=Backend.DIFFUSERS state=idle
2023-12-07 02:39:59,693 | sd | DEBUG | launch | Server: alive=True jobs=0 requests=241 uptime=945 memory=1.92/31.92 backend=Backend.DIFFUSERS state=idle
2023-12-07 02:41:46,451 | sd | INFO | api | MOTD: N/A
2023-12-07 02:41:52,756 | sd | DEBUG | theme | Themes: builtin=6 default=5 external=55
2023-12-07 02:41:53,023 | sd | INFO | api | Browser session: user=None client=172.31.10.139 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36
2023-12-07 02:41:59,771 | sd | DEBUG | launch | Server: alive=True jobs=0 requests=328 uptime=1065 memory=1.92/31.92 backend=Backend.DIFFUSERS state=idle
2023-12-07 02:43:59,832 | sd | DEBUG | launch | Server: alive=True jobs=0 requests=389 uptime=1185 memory=1.93/31.92 backend=Backend.DIFFUSERS state=idle
2023-12-07 02:45:59,899 | sd | DEBUG | launch | Server: alive=True jobs=0 requests=413 uptime=1305 memory=1.93/31.92 backend=Backend.DIFFUSERS state=idle
2023-12-07 02:46:35,389 | sd | DEBUG | shared | Save: file="config.json" json=43 bytes=1737
2023-12-07 02:46:35,390 | sd | DEBUG | shared | Unused settings: ['multiple_tqdm', 'animatediff_model_path', 'animatediff_s3_host', 'animatediff_s3_port', 'animatediff_s3_access_key', 'animatediff_s3_secret_key', 'animatediff_s3_storge_bucket']
2023-12-07 02:46:35,391 | sd | INFO | ui | Settings: changed=5 ['cuda_compile', 'cuda_compile_vae', 'hypertile_unet_tile', 'diffusers_vae_slicing', 'diffusers_vae_tiling']
2023-12-07 02:46:42,630 | sd | DEBUG | shared | Save: file="config.json" json=43 bytes=1737
2023-12-07 02:46:42,631 | sd | DEBUG | shared | Unused settings: ['multiple_tqdm', 'animatediff_model_path', 'animatediff_s3_host', 'animatediff_s3_port', 'animatediff_s3_access_key', 'animatediff_s3_secret_key', 'animatediff_s3_storge_bucket']
2023-12-07 02:46:47,316 | sd | DEBUG | generation_parameters_copypaste | Paste prompt: type="params" prompt="girl in field, <lora:add_detail:1>, 
Steps: 20, Seed: 2324062234, Sampler: Default, CFG scale: 6, Size: 512x512, Parser: Full parser, Model: dreamshaper_8, Model hash: 879db523c3, Backend: Diffusers, App: SD.Next, Version: 93f35cc, Operations: txt2img, Hypertile UNet: 368, Lora hashes: "add_detail: 7c6bad76eb54", Pipeline: StableDiffusionPipeline"
2023-12-07 02:46:47,319 | sd | DEBUG | generation_parameters_copypaste | Settings overrides: []
2023-12-07 02:47:16,934 | sd | DEBUG | txt2img | txt2img: id_task=task(bjh0bx69hcmze31)|prompt=girl in field, <lora:add_detail:1>,|negative_prompt=|prompt_styles=[]|steps=20|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=2324062234.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 02:47:16,937 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 02:47:16,938 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=256
2023-12-07 02:47:17,087 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail'] patch=0.00 load=0.14
2023-12-07 02:47:17,234 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 20, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 02:47:21,125 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.78
2023-12-07 02:47:21,132 | sd | DEBUG | images | Saving: image="outputs\text\00000-dreamshaper_8-girl in field lora add detail 1.jpg" type=JPEG size=512x512
2023-12-07 02:47:21,137 | sd | INFO | processing | Processed: images=1 time=4.19 its=4.77 memory={'ram': {'used': 1.95, 'total': 31.92}, 'gpu': {'used': 3.52, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 02:47:49,316 | sd | DEBUG | txt2img | txt2img: id_task=task(azztnngxb3ysofa)|prompt=girl in field, <lora:add_detail:1>,|negative_prompt=|prompt_styles=[]|steps=20|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=2|cfg_scale=6|clip_skip=1|seed=2324062234.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 02:47:49,319 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 02:47:49,320 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=256
2023-12-07 02:47:49,475 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail'] patch=0.00 load=0.15
2023-12-07 02:47:49,746 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([2, 77, 768]), 'negative_prompt_embeds': torch.Size([2, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 20, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 02:47:53,537 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=2 latents=torch.Size([2, 4, 64, 64]) time=0.022
2023-12-07 02:47:53,687 | sd | DEBUG | images | Saving: image="outputs\text\00001-dreamshaper_8-girl in field lora add detail 1.jpg" type=JPEG size=512x512
2023-12-07 02:47:53,697 | sd | DEBUG | images | Saving: image="outputs\text\00002-dreamshaper_8-girl in field lora add detail 1.jpg" type=JPEG size=512x512
2023-12-07 02:47:53,701 | sd | INFO | processing | Processed: images=2 time=4.38 its=9.14 memory={'ram': {'used': 1.96, 'total': 31.92}, 'gpu': {'used': 4.43, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 02:47:53,705 | sd | DEBUG | images | Saving: image="outputs\grids\00000-dreamshaper_8-girl in field lora add detail 1-grid.jpg" type=JPEG size=1024x512
2023-12-07 02:47:59,979 | sd | DEBUG | launch | Server: alive=True jobs=0 requests=458 uptime=1425 memory=1.97/31.92 backend=Backend.DIFFUSERS state=idle
2023-12-07 02:48:08,576 | sd | DEBUG | txt2img | txt2img: id_task=task(xcc4483qbgx8vbj)|prompt=girl in field, <lora:add_detail:1>,|negative_prompt=|prompt_styles=[]|steps=20|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=4|cfg_scale=6|clip_skip=1|seed=2324062234.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 02:48:08,579 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 02:48:08,580 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=256
2023-12-07 02:48:08,739 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail'] patch=0.00 load=0.15
2023-12-07 02:48:09,257 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([4, 77, 768]), 'negative_prompt_embeds': torch.Size([4, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 20, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 02:48:16,274 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=4 latents=torch.Size([4, 4, 64, 64]) time=0.809
2023-12-07 02:48:16,503 | sd | DEBUG | images | Saving: image="outputs\text\00003-dreamshaper_8-girl in field lora add detail 1.jpg" type=JPEG size=512x512
2023-12-07 02:48:16,513 | sd | DEBUG | images | Saving: image="outputs\text\00004-dreamshaper_8-girl in field lora add detail 1.jpg" type=JPEG size=512x512
2023-12-07 02:48:16,523 | sd | DEBUG | images | Saving: image="outputs\text\00005-dreamshaper_8-girl in field lora add detail 1.jpg" type=JPEG size=512x512
2023-12-07 02:48:16,533 | sd | DEBUG | images | Saving: image="outputs\text\00006-dreamshaper_8-girl in field lora add detail 1.jpg" type=JPEG size=512x512
2023-12-07 02:48:16,537 | sd | INFO | processing | Processed: images=4 time=7.95 its=10.06 memory={'ram': {'used': 1.99, 'total': 31.92}, 'gpu': {'used': 4.36, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 02:48:16,542 | sd | DEBUG | images | Saving: image="outputs\grids\00001-dreamshaper_8-girl in field lora add detail 1-grid.jpg" type=JPEG size=1024x1024
2023-12-07 02:48:42,668 | sd | DEBUG | txt2img | txt2img: id_task=task(hiyo1vqvwhk45yq)|prompt=girl in field, |negative_prompt=|prompt_styles=[]|steps=20|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=4|cfg_scale=6|clip_skip=1|seed=2324062234.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 02:48:42,672 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 02:48:42,673 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=256
2023-12-07 02:48:42,921 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([4, 77, 768]), 'negative_prompt_embeds': torch.Size([4, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 20, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 02:48:47,828 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=4 latents=torch.Size([4, 4, 64, 64]) time=0.08
2023-12-07 02:48:48,084 | sd | DEBUG | images | Saving: image="outputs\text\00007-dreamshaper_8-girl in field.jpg" type=JPEG size=512x512
2023-12-07 02:48:48,093 | sd | DEBUG | images | Saving: image="outputs\text\00008-dreamshaper_8-girl in field.jpg" type=JPEG size=512x512
2023-12-07 02:48:48,102 | sd | DEBUG | images | Saving: image="outputs\text\00009-dreamshaper_8-girl in field.jpg" type=JPEG size=512x512
2023-12-07 02:48:48,111 | sd | DEBUG | images | Saving: image="outputs\text\00010-dreamshaper_8-girl in field.jpg" type=JPEG size=512x512
2023-12-07 02:48:48,116 | sd | INFO | processing | Processed: images=4 time=5.44 its=14.71 memory={'ram': {'used': 2.0, 'total': 31.92}, 'gpu': {'used': 4.5, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 02:48:48,120 | sd | DEBUG | images | Saving: image="outputs\grids\00002-dreamshaper_8-girl in field-grid.jpg" type=JPEG size=1024x1024
2023-12-07 02:50:00,438 | sd | DEBUG | launch | Server: alive=True jobs=0 requests=510 uptime=1545 memory=1.99/31.92 backend=Backend.DIFFUSERS state=idle
2023-12-07 02:51:05,534 | sd | DEBUG | txt2img | txt2img: id_task=task(bf7kx2kop8tjk9j)|prompt=girl in field, |negative_prompt=|prompt_styles=[]|steps=30|sampler_index=13|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=4|cfg_scale=6|clip_skip=1|seed=2324062234.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 02:51:05,537 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 02:51:05,538 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=256
2023-12-07 02:51:05,787 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([4, 77, 768]), 'negative_prompt_embeds': torch.Size([4, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 30, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 02:51:05,789 | sd | DEBUG | sd_samplers | Sampler: sampler="Euler a" config={'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'prediction_type': 'epsilon'}
2023-12-07 02:51:12,875 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=4 latents=torch.Size([4, 4, 64, 64]) time=0.789
2023-12-07 02:51:13,083 | sd | DEBUG | images | Saving: image="outputs\text\00011-dreamshaper_8-girl in field.jpg" type=JPEG size=512x512
2023-12-07 02:51:13,092 | sd | DEBUG | images | Saving: image="outputs\text\00012-dreamshaper_8-girl in field.jpg" type=JPEG size=512x512
2023-12-07 02:51:13,100 | sd | DEBUG | images | Saving: image="outputs\text\00013-dreamshaper_8-girl in field.jpg" type=JPEG size=512x512
2023-12-07 02:51:13,107 | sd | DEBUG | images | Saving: image="outputs\text\00014-dreamshaper_8-girl in field.jpg" type=JPEG size=512x512
2023-12-07 02:51:13,111 | sd | INFO | processing | Processed: images=4 time=7.57 its=15.86 memory={'ram': {'used': 2.06, 'total': 31.92}, 'gpu': {'used': 4.36, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 02:51:13,116 | sd | DEBUG | images | Saving: image="outputs\grids\00003-dreamshaper_8-girl in field-grid.jpg" type=JPEG size=1024x1024
2023-12-07 02:51:59,504 | sd | DEBUG | launch | Server: alive=True jobs=0 requests=554 uptime=1665 memory=2.06/31.92 backend=Backend.DIFFUSERS state=idle
2023-12-07 02:52:00,192 | sd | DEBUG | txt2img | txt2img: id_task=task(9sq1a7h39996q6o)|prompt=girl in field, <lora:add_detail:1>,|negative_prompt=|prompt_styles=[]|steps=30|sampler_index=13|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=4|cfg_scale=6|clip_skip=1|seed=2324062234.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 02:52:00,195 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 02:52:00,196 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=256
2023-12-07 02:52:00,358 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail'] patch=0.00 load=0.16
2023-12-07 02:52:00,866 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([4, 77, 768]), 'negative_prompt_embeds': torch.Size([4, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 30, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 02:52:00,868 | sd | DEBUG | sd_samplers | Sampler: sampler="Euler a" config={'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'prediction_type': 'epsilon'}
2023-12-07 02:52:09,956 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=4 latents=torch.Size([4, 4, 64, 64]) time=0.842
2023-12-07 02:52:10,171 | sd | DEBUG | images | Saving: image="outputs\text\00015-dreamshaper_8-girl in field lora add detail 1.jpg" type=JPEG size=512x512
2023-12-07 02:52:10,181 | sd | DEBUG | images | Saving: image="outputs\text\00016-dreamshaper_8-girl in field lora add detail 1.jpg" type=JPEG size=512x512
2023-12-07 02:52:10,189 | sd | DEBUG | images | Saving: image="outputs\text\00017-dreamshaper_8-girl in field lora add detail 1.jpg" type=JPEG size=512x512
2023-12-07 02:52:10,199 | sd | DEBUG | images | Saving: image="outputs\text\00018-dreamshaper_8-girl in field lora add detail 1.jpg" type=JPEG size=512x512
2023-12-07 02:52:10,202 | sd | INFO | processing | Processed: images=4 time=10.00 its=12.00 memory={'ram': {'used': 2.08, 'total': 31.92}, 'gpu': {'used': 4.43, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 02:52:10,206 | sd | DEBUG | images | Saving: image="outputs\grids\00004-dreamshaper_8-girl in field lora add detail 1-grid.jpg" type=JPEG size=1024x1024
2023-12-07 02:52:37,587 | sd | DEBUG | shared | Save: file="config.json" json=44 bytes=1775
2023-12-07 02:52:37,589 | sd | DEBUG | shared | Unused settings: ['multiple_tqdm', 'animatediff_model_path', 'animatediff_s3_host', 'animatediff_s3_port', 'animatediff_s3_access_key', 'animatediff_s3_secret_key', 'animatediff_s3_storge_bucket']
2023-12-07 02:52:37,590 | sd | INFO | ui | Settings: changed=1 ['extra_networks_sidebar_width']
2023-12-07 02:53:34,951 | sd | DEBUG | shared | Save: file="config.json" json=46 bytes=1847
2023-12-07 02:53:34,952 | sd | DEBUG | shared | Unused settings: ['multiple_tqdm', 'animatediff_model_path', 'animatediff_s3_host', 'animatediff_s3_port', 'animatediff_s3_access_key', 'animatediff_s3_secret_key', 'animatediff_s3_storge_bucket']
2023-12-07 02:53:34,953 | sd | INFO | ui | Settings: changed=2 ['extra_networks_card_cover', 'extra_networks_height']
2023-12-07 02:53:59,580 | sd | DEBUG | launch | Server: alive=True jobs=0 requests=639 uptime=1785 memory=2.06/31.92 backend=Backend.DIFFUSERS state=idle
2023-12-07 02:54:51,511 | sd | DEBUG | txt2img | txt2img: id_task=task(h0b61uttl8pr99m)|prompt=gothgal in field, <lora:add_detail:1>, <lora:edgGothGal_MINI:1.0>|negative_prompt=|prompt_styles=[]|steps=30|sampler_index=13|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=4|cfg_scale=6|clip_skip=1|seed=2324062234.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 02:54:51,514 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 02:54:51,515 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=256
2023-12-07 02:54:51,867 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'edgGothGal_MINI'] patch=0.00 load=0.35
2023-12-07 02:54:52,635 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([4, 77, 768]), 'negative_prompt_embeds': torch.Size([4, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 30, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 02:54:52,636 | sd | DEBUG | sd_samplers | Sampler: sampler="Euler a" config={'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'prediction_type': 'epsilon'}
2023-12-07 02:55:03,563 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=4 latents=torch.Size([4, 4, 64, 64]) time=0.855
2023-12-07 02:55:03,790 | sd | DEBUG | images | Saving: image="outputs\text\00019-dreamshaper_8-gothgal in field lora add detail 1 edgGothGal.jpg" type=JPEG size=512x512
2023-12-07 02:55:03,800 | sd | DEBUG | images | Saving: image="outputs\text\00020-dreamshaper_8-gothgal in field lora add detail 1 edgGothGal.jpg" type=JPEG size=512x512
2023-12-07 02:55:03,808 | sd | DEBUG | images | Saving: image="outputs\text\00021-dreamshaper_8-gothgal in field lora add detail 1 edgGothGal.jpg" type=JPEG size=512x512
2023-12-07 02:55:03,816 | sd | DEBUG | images | Saving: image="outputs\text\00022-dreamshaper_8-gothgal in field lora add detail 1 edgGothGal.jpg" type=JPEG size=512x512
2023-12-07 02:55:03,821 | sd | INFO | processing | Processed: images=4 time=12.30 its=9.76 memory={'ram': {'used': 2.08, 'total': 31.92}, 'gpu': {'used': 4.44, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 02:55:03,825 | sd | DEBUG | images | Saving: image="outputs\grids\00005-dreamshaper_8-gothgal in field lora add detail 1 edgGothGal-grid.jpg" type=JPEG size=1024x1024
2023-12-07 02:55:40,481 | sd | DEBUG | txt2img | txt2img: id_task=task(elgtwswtq7bc628)|prompt=gothgal in field, <lora:add_detail:1>, <lora:edgGothGal_MINI:1.0>|negative_prompt=|prompt_styles=[]|steps=30|sampler_index=13|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=4|cfg_scale=6|clip_skip=1|seed=2324062234.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 02:55:40,485 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 02:55:40,485 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=256
2023-12-07 02:55:40,731 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'edgGothGal_MINI'] patch=0.00 load=0.24
2023-12-07 02:55:41,464 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([4, 77, 768]), 'negative_prompt_embeds': torch.Size([4, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 30, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 02:55:41,467 | sd | DEBUG | sd_samplers | Sampler: sampler="Euler a" config={'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'prediction_type': 'epsilon'}
2023-12-07 02:55:51,911 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=4 latents=torch.Size([4, 4, 64, 64]) time=0.808
2023-12-07 02:55:52,128 | sd | DEBUG | images | Saving: image="outputs\text\00023-dreamshaper_8-gothgal in field lora add detail 1 edgGothGal.jpg" type=JPEG size=512x512
2023-12-07 02:55:52,139 | sd | DEBUG | images | Saving: image="outputs\text\00024-dreamshaper_8-gothgal in field lora add detail 1 edgGothGal.jpg" type=JPEG size=512x512
2023-12-07 02:55:52,148 | sd | DEBUG | images | Saving: image="outputs\text\00025-dreamshaper_8-gothgal in field lora add detail 1 edgGothGal.jpg" type=JPEG size=512x512
2023-12-07 02:55:52,157 | sd | DEBUG | images | Saving: image="outputs\text\00026-dreamshaper_8-gothgal in field lora add detail 1 edgGothGal.jpg" type=JPEG size=512x512
2023-12-07 02:55:52,161 | sd | INFO | processing | Processed: images=4 time=11.67 its=10.28 memory={'ram': {'used': 2.09, 'total': 31.92}, 'gpu': {'used': 4.52, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 02:55:52,165 | sd | DEBUG | images | Saving: image="outputs\grids\00006-dreamshaper_8-gothgal in field lora add detail 1 edgGothGal-grid.jpg" type=JPEG size=1024x1024
2023-12-07 02:55:59,642 | sd | DEBUG | launch | Server: alive=True jobs=0 requests=702 uptime=1905 memory=2.09/31.92 backend=Backend.DIFFUSERS state=idle
2023-12-07 02:56:05,593 | sd | DEBUG | txt2img | txt2img: id_task=task(zut43dp3ebzgkjh)|prompt=gothgal in field, <lora:add_detail:0.8>, <lora:edgGothGal_MINI:0.7>|negative_prompt=|prompt_styles=[]|steps=30|sampler_index=13|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=4|cfg_scale=6|clip_skip=1|seed=2324062234.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 02:56:05,597 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 02:56:05,598 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=256
2023-12-07 02:56:05,824 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'edgGothGal_MINI'] patch=0.00 load=0.22
2023-12-07 02:56:06,544 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([4, 77, 768]), 'negative_prompt_embeds': torch.Size([4, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 30, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 02:56:06,546 | sd | DEBUG | sd_samplers | Sampler: sampler="Euler a" config={'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'prediction_type': 'epsilon'}
2023-12-07 02:56:14,716 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=4 latents=torch.Size([4, 4, 64, 64]) time=0.078
2023-12-07 02:56:14,963 | sd | DEBUG | images | Saving: image="outputs\text\00027-dreamshaper_8-gothgal in field add detail edgGothGal MINI.jpg" type=JPEG size=512x512
2023-12-07 02:56:14,974 | sd | DEBUG | images | Saving: image="outputs\text\00028-dreamshaper_8-gothgal in field add detail edgGothGal MINI.jpg" type=JPEG size=512x512
2023-12-07 02:56:14,981 | sd | DEBUG | images | Saving: image="outputs\text\00029-dreamshaper_8-gothgal in field add detail edgGothGal MINI.jpg" type=JPEG size=512x512
2023-12-07 02:56:14,990 | sd | DEBUG | images | Saving: image="outputs\text\00030-dreamshaper_8-gothgal in field add detail edgGothGal MINI.jpg" type=JPEG size=512x512
2023-12-07 02:56:14,994 | sd | INFO | processing | Processed: images=4 time=9.39 its=12.78 memory={'ram': {'used': 2.09, 'total': 31.92}, 'gpu': {'used': 4.74, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 02:56:14,999 | sd | DEBUG | images | Saving: image="outputs\grids\00007-dreamshaper_8-gothgal in field add detail edgGothGal MINI-grid.jpg" type=JPEG size=1024x1024
2023-12-07 02:56:28,698 | sd | DEBUG | shared | Save: file="config.json" json=47 bytes=1876
2023-12-07 02:56:28,699 | sd | DEBUG | shared | Unused settings: ['multiple_tqdm', 'animatediff_model_path', 'animatediff_s3_host', 'animatediff_s3_port', 'animatediff_s3_access_key', 'animatediff_s3_secret_key', 'animatediff_s3_storge_bucket']
2023-12-07 02:56:28,700 | sd | INFO | ui | Settings: changed=1 ['lora_in_memory_limit']
2023-12-07 02:56:30,517 | sd | DEBUG | txt2img | txt2img: id_task=task(oq09lim5qkvk9f2)|prompt=gothgal in field, <lora:add_detail:0.8>, <lora:edgGothGal_MINI:0.7>|negative_prompt=|prompt_styles=[]|steps=30|sampler_index=13|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=4|cfg_scale=6|clip_skip=1|seed=2324062234.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 02:56:30,520 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 02:56:30,521 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=256
2023-12-07 02:56:30,747 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'edgGothGal_MINI'] patch=0.00 load=0.22
2023-12-07 02:56:31,460 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([4, 77, 768]), 'negative_prompt_embeds': torch.Size([4, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 30, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 02:56:31,463 | sd | DEBUG | sd_samplers | Sampler: sampler="Euler a" config={'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'prediction_type': 'epsilon'}
2023-12-07 02:56:42,142 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=4 latents=torch.Size([4, 4, 64, 64]) time=0.811
2023-12-07 02:56:42,367 | sd | DEBUG | images | Saving: image="outputs\text\00031-dreamshaper_8-gothgal in field add detail edgGothGal MINI.jpg" type=JPEG size=512x512
2023-12-07 02:56:42,377 | sd | DEBUG | images | Saving: image="outputs\text\00032-dreamshaper_8-gothgal in field add detail edgGothGal MINI.jpg" type=JPEG size=512x512
2023-12-07 02:56:42,387 | sd | DEBUG | images | Saving: image="outputs\text\00033-dreamshaper_8-gothgal in field add detail edgGothGal MINI.jpg" type=JPEG size=512x512
2023-12-07 02:56:42,396 | sd | DEBUG | images | Saving: image="outputs\text\00034-dreamshaper_8-gothgal in field add detail edgGothGal MINI.jpg" type=JPEG size=512x512
2023-12-07 02:56:42,401 | sd | INFO | processing | Processed: images=4 time=11.88 its=10.10 memory={'ram': {'used': 2.1, 'total': 31.92}, 'gpu': {'used': 4.6, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 02:56:42,405 | sd | DEBUG | images | Saving: image="outputs\grids\00008-dreamshaper_8-gothgal in field add detail edgGothGal MINI-grid.jpg" type=JPEG size=1024x1024
2023-12-07 02:57:00,150 | sd | DEBUG | txt2img | txt2img: id_task=task(8nyev9kw3n433ri)|prompt=gothgal in field, |negative_prompt=|prompt_styles=[]|steps=30|sampler_index=13|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=4|cfg_scale=6|clip_skip=1|seed=2324062234.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 02:57:00,153 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 02:57:00,154 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=256
2023-12-07 02:57:00,403 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([4, 77, 768]), 'negative_prompt_embeds': torch.Size([4, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 30, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 02:57:00,406 | sd | DEBUG | sd_samplers | Sampler: sampler="Euler a" config={'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'prediction_type': 'epsilon'}
2023-12-07 02:57:07,354 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=4 latents=torch.Size([4, 4, 64, 64]) time=0.78
2023-12-07 02:57:07,573 | sd | DEBUG | images | Saving: image="outputs\text\00035-dreamshaper_8-gothgal in field.jpg" type=JPEG size=512x512
2023-12-07 02:57:07,582 | sd | DEBUG | images | Saving: image="outputs\text\00036-dreamshaper_8-gothgal in field.jpg" type=JPEG size=512x512
2023-12-07 02:57:07,590 | sd | DEBUG | images | Saving: image="outputs\text\00037-dreamshaper_8-gothgal in field.jpg" type=JPEG size=512x512
2023-12-07 02:57:07,598 | sd | DEBUG | images | Saving: image="outputs\text\00038-dreamshaper_8-gothgal in field.jpg" type=JPEG size=512x512
2023-12-07 02:57:07,603 | sd | INFO | processing | Processed: images=4 time=7.44 its=16.12 memory={'ram': {'used': 2.09, 'total': 31.92}, 'gpu': {'used': 4.57, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 02:57:07,606 | sd | DEBUG | images | Saving: image="outputs\grids\00009-dreamshaper_8-gothgal in field-grid.jpg" type=JPEG size=1024x1024
2023-12-07 02:57:59,703 | sd | DEBUG | launch | Server: alive=True jobs=0 requests=776 uptime=2025 memory=2.07/31.92 backend=Backend.DIFFUSERS state=idle
2023-12-07 02:58:34,322 | sd | DEBUG | txt2img | txt2img: id_task=task(aji1clqzrvrqhpx)|prompt=cyborg gothgal in field, dark theme, <lora:add_detail:0.8>, <lora:edgGothGal_MINI:0.7> <lora:LowRA:1.0> <lora:Futuristicbot4:1.0>|negative_prompt=|prompt_styles=[]|steps=30|sampler_index=13|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=4|cfg_scale=6|clip_skip=1|seed=2324062234.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 02:58:34,326 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 02:58:34,327 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=256
2023-12-07 02:58:36,823 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'edgGothGal_MINI', 'LowRA', 'Futuristicbot4'] patch=0.00 load=2.49
2023-12-07 02:58:38,006 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([4, 77, 768]), 'negative_prompt_embeds': torch.Size([4, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 30, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 02:58:38,009 | sd | DEBUG | sd_samplers | Sampler: sampler="Euler a" config={'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'prediction_type': 'epsilon'}
2023-12-07 02:58:52,326 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=4 latents=torch.Size([4, 4, 64, 64]) time=0.778
2023-12-07 02:58:52,544 | sd | DEBUG | images | Saving: image="outputs\text\00039-dreamshaper_8-cyborg gothgal in field dark theme add detail.jpg" type=JPEG size=512x512
2023-12-07 02:58:52,553 | sd | DEBUG | images | Saving: image="outputs\text\00040-dreamshaper_8-cyborg gothgal in field dark theme add detail.jpg" type=JPEG size=512x512
2023-12-07 02:58:52,563 | sd | DEBUG | images | Saving: image="outputs\text\00041-dreamshaper_8-cyborg gothgal in field dark theme add detail.jpg" type=JPEG size=512x512
2023-12-07 02:58:52,571 | sd | DEBUG | images | Saving: image="outputs\text\00042-dreamshaper_8-cyborg gothgal in field dark theme add detail.jpg" type=JPEG size=512x512
2023-12-07 02:58:52,577 | sd | INFO | processing | Processed: images=4 time=18.25 its=6.58 memory={'ram': {'used': 2.12, 'total': 31.92}, 'gpu': {'used': 4.76, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 02:58:52,581 | sd | DEBUG | images | Saving: image="outputs\grids\00010-dreamshaper_8-cyborg gothgal in field dark theme add detail-grid.jpg" type=JPEG size=1024x1024
2023-12-07 02:59:20,676 | sd | DEBUG | shared | Unused settings: ['multiple_tqdm', 'animatediff_model_path', 'animatediff_s3_host', 'animatediff_s3_port', 'animatediff_s3_access_key', 'animatediff_s3_secret_key', 'animatediff_s3_storge_bucket']
2023-12-07 02:59:22,353 | sd | DEBUG | txt2img | txt2img: id_task=task(9bf6xbtb21fxanh)|prompt=cyborg gothgal in field, dark theme, <lora:add_detail:0.3>, <lora:edgGothGal_MINI:0.7> <lora:LowRA:0.5> <lora:Futuristicbot4:0.6>|negative_prompt=|prompt_styles=[]|steps=30|sampler_index=13|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=4|cfg_scale=6|clip_skip=1|seed=2324062234.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 02:59:22,356 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 02:59:22,357 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=256
2023-12-07 02:59:22,575 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'edgGothGal_MINI', 'LowRA', 'Futuristicbot4'] patch=0.00 load=0.21
2023-12-07 02:59:23,708 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([4, 77, 768]), 'negative_prompt_embeds': torch.Size([4, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 30, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 02:59:23,710 | sd | DEBUG | sd_samplers | Sampler: sampler="Euler a" config={'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'prediction_type': 'epsilon'}
2023-12-07 02:59:37,956 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=4 latents=torch.Size([4, 4, 64, 64]) time=0.777
2023-12-07 02:59:38,174 | sd | DEBUG | images | Saving: image="outputs\text\00043-dreamshaper_8-cyborg gothgal in field dark theme add detail.jpg" type=JPEG size=512x512
2023-12-07 02:59:38,184 | sd | DEBUG | images | Saving: image="outputs\text\00044-dreamshaper_8-cyborg gothgal in field dark theme add detail.jpg" type=JPEG size=512x512
2023-12-07 02:59:38,191 | sd | DEBUG | images | Saving: image="outputs\text\00045-dreamshaper_8-cyborg gothgal in field dark theme add detail.jpg" type=JPEG size=512x512
2023-12-07 02:59:38,200 | sd | DEBUG | images | Saving: image="outputs\text\00046-dreamshaper_8-cyborg gothgal in field dark theme add detail.jpg" type=JPEG size=512x512
2023-12-07 02:59:38,204 | sd | INFO | processing | Processed: images=4 time=15.84 its=7.57 memory={'ram': {'used': 2.11, 'total': 31.92}, 'gpu': {'used': 4.5, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 02:59:38,208 | sd | DEBUG | images | Saving: image="outputs\grids\00011-dreamshaper_8-cyborg gothgal in field dark theme add detail-grid.jpg" type=JPEG size=1024x1024
2023-12-07 02:59:59,889 | sd | DEBUG | launch | Server: alive=True jobs=0 requests=849 uptime=2145 memory=2.09/31.92 backend=Backend.DIFFUSERS state=idle
2023-12-07 03:01:02,133 | sd | INFO | webui | Exiting

Backend: Diffusers
Branch: Master
Model: SD 1.5


vladmandic commented 7 months ago
devils-shadow commented 7 months ago

as per your instructions here's what I did, taking into consideration the warmup period. Gen parameters: diffusers backend, prompt "girl in field", 50 steps, sd1.5/dreamshaper, all other settings default, using xformers.

- 1st run: 5 single-image runs, 50 steps, prompt only, no lora
- 2nd run: 5 single-image runs, 50 steps, prompt + 1 lora
- 3rd run: 5 single-image runs, 50 steps, prompt + 2 loras
- 4th run: 5 single-image runs, 50 steps, prompt + 3 loras
- 5th run: 5 single-image runs, 50 steps, prompt + 4 loras

below are the average times for these runs (screenshot of the averages attached)
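For reference, a minimal sketch of how those averages can be recomputed from the `Processed:` lines in sdnext.log, assuming the log format shown further below; the regex, grouping size, and log path are assumptions for illustration, not part of SD.Next:

```python
import re
from statistics import mean

# Matches lines like:
# 2023-12-07 19:29:15,341 | sd | INFO | processing | Processed: images=1 time=2.81 its=17.81 ...
PROCESSED = re.compile(r"Processed: images=\d+ time=([\d.]+) its=([\d.]+)")

def average_runs(log_path: str, runs_per_group: int = 5):
    """Group consecutive 'Processed' entries into batches of runs_per_group
    (one group per lora count in the test above) and print average time/its."""
    times, its = [], []
    with open(log_path, encoding="utf-8") as fh:
        for line in fh:
            m = PROCESSED.search(line)
            if m:
                times.append(float(m.group(1)))
                its.append(float(m.group(2)))
    for i in range(0, len(times), runs_per_group):
        group_t = times[i:i + runs_per_group]
        group_i = its[i:i + runs_per_group]
        print(f"group {i // runs_per_group}: avg time={mean(group_t):.2f}s avg its={mean(group_i):.2f}")

if __name__ == "__main__":
    average_runs("sdnext.log")  # path is an assumption; point it at your own log
```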

based on your explanation, or at least what I understand of it, some slowdown is expected due to the new way loras are now handled. In that case, would the merge/unmerge overhead be bigger or smaller than, for example, going from an average of 3 seconds per generation (with no lora) to an average of 11 seconds with 4 loras?
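As a rough back-of-the-envelope of that trade-off, assuming (hypothetically) that merging loras is a fixed per-prompt cost while applying them on the fly adds a small cost to every sampling step; none of these numbers come from SD.Next, they are placeholders for illustration:

```python
# Hypothetical numbers for illustration only, not measurements from SD.Next.
steps = 50            # sampling steps per image
merge_cost = 2.5      # one-time cost to merge/unmerge lora weights, in seconds
per_step_cost = 0.16  # extra cost per step if loras are applied on the fly, in seconds

merged_total = merge_cost                  # paid once per prompt
on_the_fly_total = steps * per_step_cost   # paid on every sampling step

print(f"merge once:     +{merged_total:.2f}s per image")
print(f"apply per step: +{on_the_fly_total:.2f}s per image")
# Break-even: merging wins whenever merge_cost < steps * per_step_cost.
```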

and here's a screenshot of the ui/settings used (screenshot attached)

below is the log for the generations averaged above

2023-12-07 19:26:35,228 | sd | INFO | launch | Starting SD.Next
2023-12-07 19:26:35,231 | sd | INFO | installer | Logger: file="E:\automatic\sdnext.log" level=INFO size=1768219 mode=append
2023-12-07 19:26:35,232 | sd | INFO | installer | Python 3.10.11 on Windows
2023-12-07 19:26:35,234 | sd | WARNING | installer | Running GIT reset
2023-12-07 19:26:37,954 | sd | INFO | installer | GIT reset complete
2023-12-07 19:26:38,076 | sd | INFO | installer | Version: app=sd.next updated=2023-12-04 hash=93f35ccf url=https://github.com/vladmandic/automatic/tree/master
2023-12-07 19:26:38,485 | sd | INFO | launch | Platform: arch=AMD64 cpu=AMD64 Family 25 Model 33 Stepping 2, AuthenticAMD system=Windows release=Windows-10-10.0.22631-SP0 python=3.10.11
2023-12-07 19:26:38,487 | sd | DEBUG | installer | Setting environment tuning
2023-12-07 19:26:38,487 | sd | DEBUG | installer | Cache folder: C:\Users\devil\.cache\huggingface\hub
2023-12-07 19:26:38,487 | sd | DEBUG | installer | Torch overrides: cuda=False rocm=False ipex=False diml=False openvino=False
2023-12-07 19:26:38,487 | sd | DEBUG | installer | Torch allowed: cuda=True rocm=True ipex=True diml=True openvino=True
2023-12-07 19:26:38,488 | sd | INFO | installer | nVidia CUDA toolkit detected: nvidia-smi present
2023-12-07 19:26:44,813 | sd | INFO | launch | Startup: standard
2023-12-07 19:26:44,814 | sd | INFO | installer | Verifying requirements
2023-12-07 19:26:44,822 | sd | INFO | installer | Verifying packages
2023-12-07 19:26:44,824 | sd | INFO | installer | Verifying submodules
2023-12-07 19:26:46,495 | sd | DEBUG | installer | Submodule: extensions-builtin/sd-extension-chainner / main
2023-12-07 19:26:47,212 | sd | DEBUG | installer | Submodule: extensions-builtin/sd-extension-system-info / main
2023-12-07 19:26:47,893 | sd | DEBUG | installer | Submodule: extensions-builtin/sd-webui-agent-scheduler / main
2023-12-07 19:26:48,606 | sd | DEBUG | installer | Submodule: extensions-builtin/sd-webui-controlnet / main
2023-12-07 19:26:49,308 | sd | DEBUG | installer | Submodule: extensions-builtin/stable-diffusion-webui-images-browser / main
2023-12-07 19:26:50,008 | sd | DEBUG | installer | Submodule: extensions-builtin/stable-diffusion-webui-rembg / master
2023-12-07 19:26:50,710 | sd | DEBUG | installer | Submodule: modules/k-diffusion / master
2023-12-07 19:26:51,629 | sd | DEBUG | installer | Submodule: modules/lora / main
2023-12-07 19:26:52,326 | sd | DEBUG | installer | Submodule: wiki / master
2023-12-07 19:26:52,988 | sd | DEBUG | paths | Register paths
2023-12-07 19:26:53,093 | sd | DEBUG | installer | Installed packages: 226
2023-12-07 19:26:53,093 | sd | DEBUG | installer | Extensions all: ['clip-interrogator-ext', 'Lora', 'sd-extension-chainner', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg']
2023-12-07 19:26:53,150 | sd | DEBUG | installer | Submodule: extensions-builtin\clip-interrogator-ext / main
2023-12-07 19:26:53,790 | sd | DEBUG | installer | Running extension installer: E:\automatic\extensions-builtin\clip-interrogator-ext\install.py
2023-12-07 19:26:59,543 | sd | DEBUG | installer | Submodule: extensions-builtin\sd-extension-chainner / main
2023-12-07 19:27:00,539 | sd | DEBUG | installer | Submodule: extensions-builtin\sd-extension-system-info / main
2023-12-07 19:27:01,162 | sd | DEBUG | installer | Running extension installer: E:\automatic\extensions-builtin\sd-extension-system-info\install.py
2023-12-07 19:27:01,555 | sd | DEBUG | installer | Submodule: extensions-builtin\sd-webui-agent-scheduler / main
2023-12-07 19:27:02,201 | sd | DEBUG | installer | Running extension installer: E:\automatic\extensions-builtin\sd-webui-agent-scheduler\install.py
2023-12-07 19:27:02,596 | sd | DEBUG | installer | Submodule: extensions-builtin\stable-diffusion-webui-images-browser / main
2023-12-07 19:27:03,235 | sd | DEBUG | installer | Running extension installer: E:\automatic\extensions-builtin\stable-diffusion-webui-images-browser\install.py
2023-12-07 19:27:03,633 | sd | DEBUG | installer | Submodule: extensions-builtin\stable-diffusion-webui-rembg / master
2023-12-07 19:27:04,273 | sd | DEBUG | installer | Running extension installer: E:\automatic\extensions-builtin\stable-diffusion-webui-rembg\install.py
2023-12-07 19:27:04,614 | sd | DEBUG | installer | Extensions all: ['a1111-sd-webui-tagcomplete', 'adetailer', 'stable-diffusion-webui-wildcards', 'ultimate-upscale-for-automatic1111']
2023-12-07 19:27:04,669 | sd | DEBUG | installer | Submodule: extensions\a1111-sd-webui-tagcomplete / main
2023-12-07 19:27:05,448 | sd | DEBUG | installer | Submodule: extensions\adetailer / main
2023-12-07 19:27:06,104 | sd | DEBUG | installer | Running extension installer: E:\automatic\extensions\adetailer\install.py
2023-12-07 19:27:06,537 | sd | DEBUG | installer | Submodule: extensions\stable-diffusion-webui-wildcards / master
2023-12-07 19:27:07,316 | sd | DEBUG | installer | Submodule: extensions\ultimate-upscale-for-automatic1111 / master
2023-12-07 19:27:08,037 | sd | INFO | installer | Extensions enabled: ['clip-interrogator-ext', 'Lora', 'sd-extension-chainner', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg', 'a1111-sd-webui-tagcomplete', 'adetailer', 'stable-diffusion-webui-wildcards', 'ultimate-upscale-for-automatic1111']
2023-12-07 19:27:08,038 | sd | INFO | installer | Verifying requirements
2023-12-07 19:27:08,043 | sd | INFO | installer | Updating Wiki
2023-12-07 19:27:08,097 | sd | DEBUG | installer | Submodule: E:\automatic\wiki / master
2023-12-07 19:27:08,746 | sd | DEBUG | launch | Setup complete without errors: 1701970029
2023-12-07 19:27:08,755 | sd | INFO | installer | Extension preload: {'extensions-builtin': 0.0, 'extensions': 0.0}
2023-12-07 19:27:08,756 | sd | DEBUG | launch | Starting module: <module 'webui' from 'E:\\automatic\\webui.py'>
2023-12-07 19:27:08,757 | sd | INFO | launch | Command line args: ['--reset', '--upgrade'] reset=True upgrade=True
2023-12-07 19:27:13,106 | sd | INFO | loader | Load packages: torch=2.1.1+cu121 diffusers=0.24.0 gradio=3.43.2
2023-12-07 19:27:13,880 | sd | DEBUG | shared | Read: file="config.json" json=51 bytes=2098
2023-12-07 19:27:13,880 | sd | DEBUG | shared | Unknown settings: ['multiple_tqdm', 'animatediff_model_path', 'animatediff_s3_host', 'animatediff_s3_port', 'animatediff_s3_access_key', 'animatediff_s3_secret_key', 'animatediff_s3_storge_bucket']
2023-12-07 19:27:13,881 | sd | INFO | shared | Engine: backend=Backend.DIFFUSERS compute=cuda mode=no_grad device=cuda cross-optimization="xFormers"
2023-12-07 19:27:13,925 | sd | INFO | shared | Device: device=NVIDIA GeForce RTX 4070 n=1 arch=sm_90 cap=(8, 9) cuda=12.1 cudnn=8801 driver=546.29
2023-12-07 19:27:22,161 | sd | DEBUG | webui | Entering start sequence
2023-12-07 19:27:22,164 | sd | DEBUG | webui | Initializing
2023-12-07 19:27:22,166 | sd | INFO | sd_vae | Available VAEs: path="models\VAE" items=6
2023-12-07 19:27:22,168 | sd | INFO | shared | Disabling uncompatible extensions: backend=Backend.DIFFUSERS ['a1111-sd-webui-lycoris', 'sd-webui-animatediff']
2023-12-07 19:27:22,171 | sd | DEBUG | modelloader | Scanning diffusers cache: models\Diffusers models\Diffusers items=1 time=0.00
2023-12-07 19:27:22,176 | sd | DEBUG | shared | Read: file="cache.json" json=2 bytes=8090
2023-12-07 19:27:22,182 | sd | DEBUG | shared | Read: file="metadata.json" json=111 bytes=144812
2023-12-07 19:27:22,188 | sd | INFO | sd_models | Available models: path="models\Stable-diffusion" items=20 time=0.02
2023-12-07 19:27:22,856 | sd | DEBUG | webui | Load extensions
2023-12-07 19:27:24,130 | sd | INFO | script_loading | Extension: script='extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py' Using sqlite file: extensions-builtin\sd-webui-agent-scheduler\task_scheduler.sqlite3
2023-12-07 19:27:26,140 | sd | INFO | script_loading | Extension: script='extensions\a1111-sd-webui-tagcomplete\scripts\tag_autocomplete_helper.py' Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
2023-12-07 19:27:27,026 | sd | INFO | script_loading | Extension: script='extensions\adetailer\scripts\!adetailer.py' [-] ADetailer initialized. version: 23.11.1, num models: 9
2023-12-07 19:27:27,035 | sd | INFO | webui | Extensions time: 4.18 { clip-interrogator-ext=0.72 Lora=0.05 sd-extension-chainner=0.07 sd-webui-agent-scheduler=0.40 stable-diffusion-webui-images-browser=0.15 stable-diffusion-webui-rembg=1.82 adetailer=0.88 }
2023-12-07 19:27:27,076 | sd | DEBUG | shared | Read: file="html/upscalers.json" json=4 bytes=2672
2023-12-07 19:27:27,080 | sd | DEBUG | shared | Read: file="extensions-builtin\sd-extension-chainner\models.json" json=24 bytes=2719
2023-12-07 19:27:27,081 | sd | DEBUG | chainner_model | chaiNNer models: path="models\chaiNNer" defined=24 discovered=0 downloaded=5
2023-12-07 19:27:27,085 | sd | DEBUG | modelloader | Load upscalers: total=52 downloaded=15 user=0 time=0.05 ['None', 'Lanczos', 'Nearest', 'ChaiNNer', 'ESRGAN', 'LDSR', 'RealESRGAN', 'SCUNet', 'SD', 'SwinIR']
2023-12-07 19:27:27,506 | sd | DEBUG | styles | Load styles: folder="models\styles" items=289 time=0.42
2023-12-07 19:27:27,512 | sd | DEBUG | webui | Creating UI
2023-12-07 19:27:27,732 | sd | INFO | theme | Load UI theme: name="black-teal" style=Auto base=sdnext.css
2023-12-07 19:27:27,837 | sd | DEBUG | shared | Read: file="html\reference.json" json=18 bytes=11012
2023-12-07 19:27:28,036 | sd | DEBUG | ui_extra_networks | Extra networks: page='model' items=38 subfolders=5 tab=txt2img folders=['models\\Stable-diffusion', 'models\\Diffusers', 'models\\Reference', 'E:\\automatic\\models\\Stable-diffusion'] list=0.09 desc=0.02 info=0.13 workers=2
2023-12-07 19:27:28,052 | sd | DEBUG | ui_extra_networks | Extra networks: page='style' items=289 subfolders=2 tab=txt2img folders=['models\\styles', 'html'] list=0.02 desc=0.00 info=0.00 workers=2
2023-12-07 19:27:28,054 | sd | DEBUG | ui_extra_networks | Extra networks: page='embedding' items=11 subfolders=1 tab=txt2img folders=['models\\embeddings'] list=0.04 desc=0.00 info=0.05 workers=2
2023-12-07 19:27:28,054 | sd | DEBUG | ui_extra_networks | Extra networks: page='hypernetwork' items=0 subfolders=1 tab=txt2img folders=['models\\hypernetworks'] list=0.00 desc=0.00 info=0.00 workers=2
2023-12-07 19:27:28,055 | sd | DEBUG | ui_extra_networks | Extra networks: page='vae' items=6 subfolders=1 tab=txt2img folders=['models\\VAE'] list=0.03 desc=0.00 info=0.02 workers=2
2023-12-07 19:27:28,058 | sd | DEBUG | ui_extra_networks | Extra networks: page='lora' items=38 subfolders=1 tab=txt2img folders=['models\\Lora', 'models\\LyCORIS'] list=0.11 desc=0.01 info=0.18 workers=2
2023-12-07 19:27:28,228 | sd | DEBUG | shared | Read: file="ui-config.json" json=0 bytes=2
2023-12-07 19:27:28,547 | sd | DEBUG | theme | Themes: builtin=6 default=5 external=55
2023-12-07 19:27:29,955 | sd | DEBUG | script_callbacks | Script: 1.31 ui_tabs E:\automatic\extensions-builtin\stable-diffusion-webui-images-browser\scripts\image_browser.py
2023-12-07 19:27:29,964 | sd | DEBUG | shared | Read: file="E:\automatic\html\extensions.json" json=328 bytes=191889
2023-12-07 19:27:30,947 | sd | DEBUG | ui_extensions | Extension list: processed=312 installed=13 enabled=11 disabled=2 visible=312 hidden=0
2023-12-07 19:27:31,248 | sd | INFO | webui | Local URL: http://127.0.0.1:7860/
2023-12-07 19:27:31,249 | sd | DEBUG | webui | Gradio functions: registered=1751
2023-12-07 19:27:31,250 | sd | INFO | middleware | Initializing middleware
2023-12-07 19:27:31,255 | sd | DEBUG | webui | Creating API
2023-12-07 19:27:31,397 | sd | INFO | task_runner | [AgentScheduler] Task queue is empty
2023-12-07 19:27:31,398 | sd | INFO | api | [AgentScheduler] Registering APIs
2023-12-07 19:27:31,512 | sd | DEBUG | webui | Scripts setup: ['X/Y/Z Grid:0.006', 'ADetailer:0.02']
2023-12-07 19:27:31,512 | sd | DEBUG | sd_models | Model metadata: file="metadata.json" no changes
2023-12-07 19:27:31,512 | sd | DEBUG | webui | Model auto load disabled
2023-12-07 19:27:31,513 | sd | DEBUG | shared | Save: file="config.json" json=51 bytes=2033
2023-12-07 19:27:31,513 | sd | DEBUG | shared | Unused settings: ['multiple_tqdm', 'animatediff_model_path', 'animatediff_s3_host', 'animatediff_s3_port', 'animatediff_s3_access_key', 'animatediff_s3_secret_key', 'animatediff_s3_storge_bucket']
2023-12-07 19:27:31,513 | sd | INFO | webui | Startup time: 22.75 { torch=2.40 gradio=1.85 diffusers=0.09 libraries=9.06 extensions=4.18 face-restore=0.67 extra-networks=0.43 ui-extra-networks=0.55 ui-img2img=0.06 ui-settings=0.40 ui-extensions=2.35 ui-defaults=0.06 launch=0.23 api=0.08 app-started=0.18 }
2023-12-07 19:27:59,532 | sd | DEBUG | launch | Server: alive=True jobs=0 requests=2 uptime=46 memory=1.12/31.92 backend=Backend.DIFFUSERS state=idle
2023-12-07 19:28:04,958 | sd | INFO | api | MOTD: N/A
2023-12-07 19:28:07,280 | sd | DEBUG | theme | Themes: builtin=6 default=5 external=55
2023-12-07 19:28:07,390 | sd | INFO | api | Browser session: user=None client=127.0.0.1 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36
2023-12-07 19:28:13,950 | sd | DEBUG | modelloader | Scanning diffusers cache: models\Diffusers models\Diffusers items=1 time=0.00
2023-12-07 19:28:13,952 | sd | INFO | sd_models | Available models: path="models\Stable-diffusion" items=20 time=0.00
2023-12-07 19:28:23,192 | sd | INFO | sd_models | Select: model="SD1.5\dreamshaper_8 [879db523c3]"
2023-12-07 19:28:23,194 | sd | DEBUG | sd_models | Load model weights: existing=False target=E:\automatic\models\Stable-diffusion\SD1.5\dreamshaper_8.safetensors info=None
2023-12-07 19:28:23,524 | sd | DEBUG | devices | Desired Torch parameters: dtype=FP16 no-half=False no-half-vae=False upscast=False
2023-12-07 19:28:23,524 | sd | INFO | devices | Setting Torch parameters: device=cuda dtype=torch.float16 vae=torch.float16 unet=torch.float16 context=inference_mode fp16=True bf16=False
2023-12-07 19:28:23,526 | sd | INFO | sd_models | Autodetect: model="Stable Diffusion" class=StableDiffusionPipeline file="E:\automatic\models\Stable-diffusion\SD1.5\dreamshaper_8.safetensors" size=2034MB
2023-12-07 19:28:31,675 | sd | DEBUG | sd_models | Setting model: pipeline=StableDiffusionPipeline config={'low_cpu_mem_usage': True, 'torch_dtype': torch.float16, 'load_connected_pipeline': True, 'variant': 'fp16', 'extract_ema': True, 'force_zeros_for_empty_prompt ': True, 'requires_aesthetics_score': False, 'use_safetensors': True}
2023-12-07 19:28:31,680 | sd | DEBUG | sd_models | Setting model VAE: name=None upcast=True
2023-12-07 19:28:32,839 | sd | INFO | textual_inversion | Load embeddings: loaded=10 skipped=1 time=0.61
2023-12-07 19:28:33,110 | sd | DEBUG | devices | gc: collected=1419 device=cuda {'ram': {'used': 4.55, 'total': 31.92}, 'gpu': {'used': 3.3, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:28:33,113 | sd | INFO | sd_models | Load model: time=9.65 { load=9.64 } native=512 {'ram': {'used': 4.55, 'total': 31.92}, 'gpu': {'used': 3.3, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:28:33,117 | sd | DEBUG | shared | Save: file="config.json" json=50 bytes=1991
2023-12-07 19:28:33,118 | sd | DEBUG | shared | Unused settings: ['multiple_tqdm', 'animatediff_model_path', 'animatediff_s3_host', 'animatediff_s3_port', 'animatediff_s3_access_key', 'animatediff_s3_secret_key', 'animatediff_s3_storge_bucket']
2023-12-07 19:28:33,118 | sd | INFO | ui | Settings: changed=1 ['sd_vae']
2023-12-07 19:28:49,472 | sd | DEBUG | txt2img | txt2img: id_task=task(v3jmyo3z5xupx33)|prompt=girl in field, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:28:49,473 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:28:49,475 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:28:50,096 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:28:55,642 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.855
2023-12-07 19:28:55,660 | sd | DEBUG | images | Saving: image="outputs\text\00623-dreamshaper_8-girl in field.jpg" type=JPEG size=512x512
2023-12-07 19:28:55,664 | sd | INFO | processing | Processed: images=1 time=6.16 its=8.11 memory={'ram': {'used': 1.95, 'total': 31.92}, 'gpu': {'used': 3.4, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:29:02,144 | sd | DEBUG | txt2img | txt2img: id_task=task(15x1kbu7sp0zw8c)|prompt=girl in field, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:29:02,145 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:29:02,146 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:29:02,214 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:29:06,563 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.749
2023-12-07 19:29:06,575 | sd | DEBUG | images | Saving: image="outputs\text\00624-dreamshaper_8-girl in field.jpg" type=JPEG size=512x512
2023-12-07 19:29:06,578 | sd | INFO | processing | Processed: images=1 time=4.43 its=11.29 memory={'ram': {'used': 1.97, 'total': 31.92}, 'gpu': {'used': 3.48, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:29:07,239 | sd | DEBUG | txt2img | txt2img: id_task=task(deob5k1crsrwrgp)|prompt=girl in field, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:29:07,240 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:29:07,241 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:29:07,309 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:29:09,955 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.011
2023-12-07 19:29:10,030 | sd | DEBUG | images | Saving: image="outputs\text\00625-dreamshaper_8-girl in field.jpg" type=JPEG size=512x512
2023-12-07 19:29:10,033 | sd | INFO | processing | Processed: images=1 time=2.79 its=17.94 memory={'ram': {'used': 1.97, 'total': 31.92}, 'gpu': {'used': 4.16, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:29:12,526 | sd | DEBUG | txt2img | txt2img: id_task=task(ojneu6edz0v3wn9)|prompt=girl in field, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:29:12,526 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:29:12,528 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:29:12,594 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:29:15,259 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.01
2023-12-07 19:29:15,337 | sd | DEBUG | images | Saving: image="outputs\text\00626-dreamshaper_8-girl in field.jpg" type=JPEG size=512x512
2023-12-07 19:29:15,341 | sd | INFO | processing | Processed: images=1 time=2.81 its=17.81 memory={'ram': {'used': 1.97, 'total': 31.92}, 'gpu': {'used': 4.16, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:29:15,995 | sd | DEBUG | txt2img | txt2img: id_task=task(h6l2ic605tfyw1l)|prompt=girl in field, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:29:15,996 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:29:15,997 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:29:16,065 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:29:18,697 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.009
2023-12-07 19:29:18,775 | sd | DEBUG | images | Saving: image="outputs\text\00627-dreamshaper_8-girl in field.jpg" type=JPEG size=512x512
2023-12-07 19:29:18,779 | sd | INFO | processing | Processed: images=1 time=2.78 its=18.01 memory={'ram': {'used': 1.97, 'total': 31.92}, 'gpu': {'used': 4.16, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:29:28,995 | sd | DEBUG | txt2img | txt2img: id_task=task(e8vk4guvsk0iamc)|prompt=girl in field, <lora:add_detail:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:29:28,995 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:29:28,997 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:29:29,717 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail'] patch=0.00 load=0.71
2023-12-07 19:29:29,860 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:29:36,059 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.758
2023-12-07 19:29:36,067 | sd | DEBUG | images | Saving: image="outputs\text\00628-dreamshaper_8-girl in field lora add detail 1.jpg" type=JPEG size=512x512
2023-12-07 19:29:36,071 | sd | INFO | processing | Processed: images=1 time=7.07 its=7.07 memory={'ram': {'used': 1.97, 'total': 31.92}, 'gpu': {'used': 3.52, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:29:36,599 | sd | DEBUG | txt2img | txt2img: id_task=task(smgo0c64sk0hgiu)|prompt=girl in field, <lora:add_detail:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:29:36,599 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:29:36,601 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:29:36,608 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail'] patch=0.00 load=0.00
2023-12-07 19:29:36,736 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:29:41,071 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.012
2023-12-07 19:29:41,147 | sd | DEBUG | images | Saving: image="outputs\text\00629-dreamshaper_8-girl in field lora add detail 1.jpg" type=JPEG size=512x512
2023-12-07 19:29:41,151 | sd | INFO | processing | Processed: images=1 time=4.54 its=11.01 memory={'ram': {'used': 1.97, 'total': 31.92}, 'gpu': {'used': 4.2, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:29:42,272 | sd | DEBUG | txt2img | txt2img: id_task=task(lj8yy3ghao2ebb1)|prompt=girl in field, <lora:add_detail:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:29:42,272 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:29:42,273 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:29:42,280 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail'] patch=0.00 load=0.00
2023-12-07 19:29:42,402 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:29:48,499 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.776
2023-12-07 19:29:48,509 | sd | DEBUG | images | Saving: image="outputs\text\00630-dreamshaper_8-girl in field lora add detail 1.jpg" type=JPEG size=512x512
2023-12-07 19:29:48,513 | sd | INFO | processing | Processed: images=1 time=6.24 its=8.02 memory={'ram': {'used': 1.98, 'total': 31.92}, 'gpu': {'used': 3.58, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:29:49,040 | sd | DEBUG | txt2img | txt2img: id_task=task(pk6m7ur3gzsjkej)|prompt=girl in field, <lora:add_detail:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:29:49,041 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:29:49,042 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:29:49,048 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail'] patch=0.00 load=0.00
2023-12-07 19:29:49,181 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:29:53,641 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.012
2023-12-07 19:29:53,717 | sd | DEBUG | images | Saving: image="outputs\text\00631-dreamshaper_8-girl in field lora add detail 1.jpg" type=JPEG size=512x512
2023-12-07 19:29:53,721 | sd | INFO | processing | Processed: images=1 time=4.68 its=10.70 memory={'ram': {'used': 1.99, 'total': 31.92}, 'gpu': {'used': 4.19, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:29:55,431 | sd | DEBUG | txt2img | txt2img: id_task=task(g98eqj1uzs44vhl)|prompt=girl in field, <lora:add_detail:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:29:55,431 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:29:55,433 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:29:55,438 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail'] patch=0.00 load=0.00
2023-12-07 19:29:55,558 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:29:59,596 | sd | DEBUG | launch | Server: alive=True jobs=1 requests=352 uptime=166 memory=1.98/31.92 backend=Backend.DIFFUSERS state=job="run_settings" 0/-1
2023-12-07 19:30:01,794 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.82
2023-12-07 19:30:01,803 | sd | DEBUG | images | Saving: image="outputs\text\00632-dreamshaper_8-girl in field lora add detail 1.jpg" type=JPEG size=512x512
2023-12-07 19:30:01,808 | sd | INFO | processing | Processed: images=1 time=6.37 its=7.85 memory={'ram': {'used': 1.98, 'total': 31.92}, 'gpu': {'used': 3.57, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:30:09,445 | sd | DEBUG | txt2img | txt2img: id_task=task(mve8mu8tognhqcz)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:30:09,445 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:30:09,447 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:30:09,557 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details'] patch=0.00 load=0.11
2023-12-07 19:30:09,754 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:30:16,126 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.012
2023-12-07 19:30:16,201 | sd | DEBUG | images | Saving: image="outputs\text\00633-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:30:16,205 | sd | INFO | processing | Processed: images=1 time=6.75 its=7.40 memory={'ram': {'used': 1.98, 'total': 31.92}, 'gpu': {'used': 4.19, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:30:17,102 | sd | DEBUG | txt2img | txt2img: id_task=task(hoqxdeegle1hyhv)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:30:17,103 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:30:17,104 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:30:17,110 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details'] patch=0.00 load=0.00
2023-12-07 19:30:17,291 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:30:23,526 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.01
2023-12-07 19:30:23,603 | sd | DEBUG | images | Saving: image="outputs\text\00634-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:30:23,607 | sd | INFO | processing | Processed: images=1 time=6.50 its=7.69 memory={'ram': {'used': 1.98, 'total': 31.92}, 'gpu': {'used': 4.19, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:30:24,662 | sd | DEBUG | txt2img | txt2img: id_task=task(o4x123fl57dyik3)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:30:24,663 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:30:24,664 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:30:24,670 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details'] patch=0.00 load=0.00
2023-12-07 19:30:24,844 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:30:31,067 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.01
2023-12-07 19:30:31,144 | sd | DEBUG | images | Saving: image="outputs\text\00635-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:30:31,149 | sd | INFO | processing | Processed: images=1 time=6.48 its=7.72 memory={'ram': {'used': 1.98, 'total': 31.92}, 'gpu': {'used': 4.19, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:30:32,226 | sd | DEBUG | txt2img | txt2img: id_task=task(00wl0uhhk6zglqh)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:30:32,227 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:30:32,228 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:30:32,234 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details'] patch=0.00 load=0.00
2023-12-07 19:30:32,406 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:30:38,713 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.014
2023-12-07 19:30:38,787 | sd | DEBUG | images | Saving: image="outputs\text\00636-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:30:38,791 | sd | INFO | processing | Processed: images=1 time=6.56 its=7.62 memory={'ram': {'used': 1.98, 'total': 31.92}, 'gpu': {'used': 4.19, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:30:39,717 | sd | DEBUG | txt2img | txt2img: id_task=task(9zkj2dlmwh1yaxe)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:30:39,718 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:30:39,719 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:30:39,725 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details'] patch=0.00 load=0.00
2023-12-07 19:30:39,897 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:30:46,138 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.009
2023-12-07 19:30:46,216 | sd | DEBUG | images | Saving: image="outputs\text\00637-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:30:46,219 | sd | INFO | processing | Processed: images=1 time=6.50 its=7.70 memory={'ram': {'used': 1.98, 'total': 31.92}, 'gpu': {'used': 4.19, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:30:57,446 | sd | DEBUG | txt2img | txt2img: id_task=task(2jrg1c0wx2ccxzs)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, <lora:3DMM_V3:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:30:57,447 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:30:57,448 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:30:58,067 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details', '3DMM_V3'] patch=0.00 load=0.61
2023-12-07 19:30:58,334 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:31:08,510 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.816
2023-12-07 19:31:08,518 | sd | DEBUG | images | Saving: image="outputs\text\00638-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:31:08,521 | sd | INFO | processing | Processed: images=1 time=11.07 its=4.52 memory={'ram': {'used': 1.99, 'total': 31.92}, 'gpu': {'used': 3.62, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:31:09,078 | sd | DEBUG | txt2img | txt2img: id_task=task(q6o82w9v1qw0fph)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, <lora:3DMM_V3:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:31:09,078 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:31:09,080 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:31:09,086 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details', '3DMM_V3'] patch=0.00 load=0.00
2023-12-07 19:31:09,319 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:31:17,432 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.012
2023-12-07 19:31:17,509 | sd | DEBUG | images | Saving: image="outputs\text\00639-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:31:17,513 | sd | INFO | processing | Processed: images=1 time=8.43 its=5.93 memory={'ram': {'used': 1.99, 'total': 31.92}, 'gpu': {'used': 4.29, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:31:18,566 | sd | DEBUG | txt2img | txt2img: id_task=task(s7goa2qahp8e642)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, <lora:3DMM_V3:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:31:18,567 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:31:18,568 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:31:18,574 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details', '3DMM_V3'] patch=0.00 load=0.00
2023-12-07 19:31:18,802 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:31:26,904 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.009
2023-12-07 19:31:26,981 | sd | DEBUG | images | Saving: image="outputs\text\00640-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:31:26,985 | sd | INFO | processing | Processed: images=1 time=8.41 its=5.94 memory={'ram': {'used': 1.99, 'total': 31.92}, 'gpu': {'used': 4.29, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:31:27,766 | sd | DEBUG | txt2img | txt2img: id_task=task(psitzx24rjoxnti)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, <lora:3DMM_V3:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:31:27,767 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:31:27,768 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:31:27,775 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details', '3DMM_V3'] patch=0.00 load=0.00
2023-12-07 19:31:28,003 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:31:36,118 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.009
2023-12-07 19:31:36,198 | sd | DEBUG | images | Saving: image="outputs\text\00641-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:31:36,202 | sd | INFO | processing | Processed: images=1 time=8.43 its=5.93 memory={'ram': {'used': 1.99, 'total': 31.92}, 'gpu': {'used': 4.29, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:31:36,936 | sd | DEBUG | txt2img | txt2img: id_task=task(zyawufeeutf82z5)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, <lora:3DMM_V3:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:31:36,937 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:31:36,938 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:31:36,944 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details', '3DMM_V3'] patch=0.00 load=0.00
2023-12-07 19:31:37,169 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:31:45,284 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.01
2023-12-07 19:31:45,363 | sd | DEBUG | images | Saving: image="outputs\text\00642-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:31:45,367 | sd | INFO | processing | Processed: images=1 time=8.42 its=5.94 memory={'ram': {'used': 1.98, 'total': 31.92}, 'gpu': {'used': 4.29, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:31:52,564 | sd | DEBUG | txt2img | txt2img: id_task=task(jr5gmd9wm4c0cua)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, <lora:3DMM_V3:1>, <lora:lo_dress_classic_style3_v1:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:31:52,565 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:31:52,566 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:31:53,350 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details', '3DMM_V3', 'lo_dress_classic_style3_v1'] patch=0.00 load=0.78
2023-12-07 19:31:53,658 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:31:59,643 | sd | DEBUG | launch | Server: alive=True jobs=1 requests=625 uptime=286 memory=2.02/31.92 backend=Backend.DIFFUSERS state=job="run_settings" 0/-1
2023-12-07 19:32:03,751 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.012
2023-12-07 19:32:03,824 | sd | DEBUG | images | Saving: image="outputs\text\00643-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:32:03,828 | sd | INFO | processing | Processed: images=1 time=11.26 its=4.44 memory={'ram': {'used': 1.99, 'total': 31.92}, 'gpu': {'used': 4.29, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:32:04,801 | sd | DEBUG | txt2img | txt2img: id_task=task(4arvn1o78ujt1c6)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, <lora:3DMM_V3:1>, <lora:lo_dress_classic_style3_v1:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:32:04,802 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:32:04,803 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:32:04,809 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details', '3DMM_V3', 'lo_dress_classic_style3_v1'] patch=0.00 load=0.00
2023-12-07 19:32:05,093 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:32:16,957 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.765
2023-12-07 19:32:16,964 | sd | DEBUG | images | Saving: image="outputs\text\00644-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:32:16,967 | sd | INFO | processing | Processed: images=1 time=12.16 its=4.11 memory={'ram': {'used': 2.0, 'total': 31.92}, 'gpu': {'used': 3.68, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:32:20,620 | sd | DEBUG | txt2img | txt2img: id_task=task(aepa96lvla1toka)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, <lora:3DMM_V3:1>, <lora:lo_dress_classic_style3_v1:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:32:20,621 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:32:20,623 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:32:20,633 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details', '3DMM_V3', 'lo_dress_classic_style3_v1'] patch=0.00 load=0.00
2023-12-07 19:32:20,914 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:32:32,759 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.764
2023-12-07 19:32:32,769 | sd | DEBUG | images | Saving: image="outputs\text\00645-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:32:32,773 | sd | INFO | processing | Processed: images=1 time=12.14 its=4.12 memory={'ram': {'used': 2.0, 'total': 31.92}, 'gpu': {'used': 3.68, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:32:33,474 | sd | DEBUG | txt2img | txt2img: id_task=task(srcb6mmtmi95g7s)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, <lora:3DMM_V3:1>, <lora:lo_dress_classic_style3_v1:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:32:33,474 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:32:33,476 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:32:33,482 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details', '3DMM_V3', 'lo_dress_classic_style3_v1'] patch=0.00 load=0.00
2023-12-07 19:32:33,766 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:32:45,769 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.766
2023-12-07 19:32:45,779 | sd | DEBUG | images | Saving: image="outputs\text\00646-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:32:45,783 | sd | INFO | processing | Processed: images=1 time=12.30 its=4.06 memory={'ram': {'used': 2.0, 'total': 31.92}, 'gpu': {'used': 3.69, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:32:46,480 | sd | DEBUG | txt2img | txt2img: id_task=task(05v9r44936gtxqe)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, <lora:3DMM_V3:1>, <lora:lo_dress_classic_style3_v1:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:32:46,481 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:32:46,482 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:32:46,487 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details', '3DMM_V3', 'lo_dress_classic_style3_v1'] patch=0.00 load=0.00
2023-12-07 19:32:46,774 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:32:56,662 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.014
2023-12-07 19:32:56,735 | sd | DEBUG | images | Saving: image="outputs\text\00647-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:32:56,739 | sd | INFO | processing | Processed: images=1 time=10.25 its=4.88 memory={'ram': {'used': 2.0, 'total': 31.92}, 'gpu': {'used': 4.36, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:33:59,691 | sd | DEBUG | launch | Server: alive=True jobs=1 requests=773 uptime=406 memory=2.0/31.92 backend=Backend.DIFFUSERS state=job="run_settings" 0/-1
2023-12-07 19:34:25,378 | sd | INFO | webui | Exiting
vladmandic commented 7 months ago

In this case, would the merge/unmerge overhead be bigger or smaller than, for example, going from an average of 3 seconds per generation (with no lora) to an average of 11 seconds with 4 loras?

there is no simple answer. how much processing overhead a lora adds doesn't depend on the number of loras or even their size, but on the number of defined blocks inside the lora itself, since each block requires a jump from the base model to the lora and back. so if a lora is large but relatively simple, a merge would be much slower; but if a lora is complex yet relatively small, the merge would be fast and on-the-fly processing would have the bigger impact.
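
as a loose illustration of the trade-off described above (not SD.Next code), here is a minimal PyTorch sketch of the two approaches for a single linear layer; `lora_down`, `lora_up`, and `scale` are hypothetical stand-ins for one loaded LoRA block.

```python
import torch
import torch.nn as nn

base = nn.Linear(768, 768, bias=False)
lora_down = torch.randn(8, 768) * 0.01   # rank-8 "down" projection (illustrative)
lora_up = torch.randn(768, 8) * 0.01     # rank-8 "up" projection (illustrative)
scale = 1.0

def forward_on_the_fly(x):
    # on-the-fly: every forward pass pays an extra low-rank matmul chain per
    # patched block, so the cost grows with the number of blocks across all
    # active loras, regardless of file size.
    return base(x) + scale * (x @ lora_down.t() @ lora_up.t())

def merge_lora():
    # merge-based: fold the low-rank delta into the base weight once up front;
    # later forward passes run at original speed, but switching loras means
    # unmerging (or reloading the base weights), which is the one-time cost.
    with torch.no_grad():
        base.weight += scale * (lora_up @ lora_down)

x = torch.randn(1, 768)
y_dynamic = forward_on_the_fly(x)
merge_lora()
y_merged = base(x)
print(torch.allclose(y_dynamic, y_merged, atol=1e-5))  # same result, different cost profile
```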

i might add a secondary method in options so you could choose between on-the-fly processing and merge-based application. but other than that, there really isn't much i can do here under this issue.

off-topic: you have both freeu and hypertile enabled. it's always best practice when troubleshooting to reduce the number of variables; if we're focusing on lora, all other settings should be left at their defaults as much as possible.

devils-shadow commented 7 months ago

Thanks, and apologies for not disabling freeu/hypertile; I'll be more careful with future reports.

As for the option to choose between on-the-fly and merge-based application, I would consider that an optimal outcome and an effective fix.

vladmandic commented 7 months ago

this has been added in the dev branch (changelog notes are updated) and will be merged to master in the next release.