devils-shadow closed this issue 7 months ago
As per your instructions, here's what I did, taking the warmup period into consideration:
Generation parameters: diffusers backend, prompt "girl in field", 50 steps, SD 1.5 / DreamShaper, xFormers enabled, all other settings default.
1st run: 5 single-image generations, 50 steps, prompt only, no LoRA
2nd run: 5 single-image generations, 50 steps, prompt + 1 LoRA
3rd run: 5 single-image generations, 50 steps, prompt + 2 LoRAs
4th run: 5 single-image generations, 50 steps, prompt + 3 LoRAs
5th run: 5 single-image generations, 50 steps, prompt + 4 LoRAs
Below are the average times for these runs.
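For reference, this is roughly how I computed the averages: drop the first (warmup) generation of each run and average the rest. A minimal sketch, with the per-image times taken from the no-LoRA generations in the log below; the function name is just for illustration:

```python
from statistics import mean

def average_runs(times_s, warmup=1):
    """Average per-image generation times, dropping the first
    `warmup` run(s) so model load / first-call setup cost
    does not skew the result."""
    kept = times_s[warmup:]
    if not kept:
        raise ValueError("need more runs than warmup discards")
    return mean(kept)

# per-image times (seconds) for the 5 no-LoRA generations, from the log
no_lora = [6.16, 4.43, 2.79, 2.81, 2.78]
print(round(average_runs(no_lora), 2))  # average of the 4 post-warmup runs
```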
Based on your explanation, or at least my understanding of it, some slowdown is expected due to the new way LoRAs are handled. In that case, would the merge/unmerge overhead be large enough to explain what I'm seeing, i.e. going from an average of 3 seconds per generation with no LoRA to an average of 11 seconds with 4 LoRAs?
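As a rough back-of-the-envelope on those two averages (this only decomposes the numbers, it does not tell us where the time actually goes):

```python
def lora_overhead(baseline_s, with_lora_s, n_loras):
    """Split the observed per-image slowdown into total added
    seconds and a naive per-LoRA share."""
    added = with_lora_s - baseline_s
    return added, added / n_loras

total, per_lora = lora_overhead(3.0, 11.0, 4)
print(total, per_lora)  # 8.0 s added per image, 2.0 s per LoRA
```

Notably, the log shows `load=0.71` only on the first generation with a LoRA and `load=0.00` after that, so the load cost looks one-time; the per-image slowdown in the averages recurs on every generation, which is why I'm asking whether merge/unmerge alone can account for it.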
And here's a screenshot of the UI/settings used.
Below is the log for the generations averaged above:
2023-12-07 19:26:35,228 | sd | INFO | launch | Starting SD.Next
2023-12-07 19:26:35,231 | sd | INFO | installer | Logger: file="E:\automatic\sdnext.log" level=INFO size=1768219 mode=append
2023-12-07 19:26:35,232 | sd | INFO | installer | Python 3.10.11 on Windows
2023-12-07 19:26:35,234 | sd | WARNING | installer | Running GIT reset
2023-12-07 19:26:37,954 | sd | INFO | installer | GIT reset complete
2023-12-07 19:26:38,076 | sd | INFO | installer | Version: app=sd.next updated=2023-12-04 hash=93f35ccf url=https://github.com/vladmandic/automatic/tree/master
2023-12-07 19:26:38,485 | sd | INFO | launch | Platform: arch=AMD64 cpu=AMD64 Family 25 Model 33 Stepping 2, AuthenticAMD system=Windows release=Windows-10-10.0.22631-SP0 python=3.10.11
2023-12-07 19:26:38,487 | sd | DEBUG | installer | Setting environment tuning
2023-12-07 19:26:38,487 | sd | DEBUG | installer | Cache folder: C:\Users\devil\.cache\huggingface\hub
2023-12-07 19:26:38,487 | sd | DEBUG | installer | Torch overrides: cuda=False rocm=False ipex=False diml=False openvino=False
2023-12-07 19:26:38,487 | sd | DEBUG | installer | Torch allowed: cuda=True rocm=True ipex=True diml=True openvino=True
2023-12-07 19:26:38,488 | sd | INFO | installer | nVidia CUDA toolkit detected: nvidia-smi present
2023-12-07 19:26:44,813 | sd | INFO | launch | Startup: standard
2023-12-07 19:26:44,814 | sd | INFO | installer | Verifying requirements
2023-12-07 19:26:44,822 | sd | INFO | installer | Verifying packages
2023-12-07 19:26:44,824 | sd | INFO | installer | Verifying submodules
2023-12-07 19:26:46,495 | sd | DEBUG | installer | Submodule: extensions-builtin/sd-extension-chainner / main
2023-12-07 19:26:47,212 | sd | DEBUG | installer | Submodule: extensions-builtin/sd-extension-system-info / main
2023-12-07 19:26:47,893 | sd | DEBUG | installer | Submodule: extensions-builtin/sd-webui-agent-scheduler / main
2023-12-07 19:26:48,606 | sd | DEBUG | installer | Submodule: extensions-builtin/sd-webui-controlnet / main
2023-12-07 19:26:49,308 | sd | DEBUG | installer | Submodule: extensions-builtin/stable-diffusion-webui-images-browser / main
2023-12-07 19:26:50,008 | sd | DEBUG | installer | Submodule: extensions-builtin/stable-diffusion-webui-rembg / master
2023-12-07 19:26:50,710 | sd | DEBUG | installer | Submodule: modules/k-diffusion / master
2023-12-07 19:26:51,629 | sd | DEBUG | installer | Submodule: modules/lora / main
2023-12-07 19:26:52,326 | sd | DEBUG | installer | Submodule: wiki / master
2023-12-07 19:26:52,988 | sd | DEBUG | paths | Register paths
2023-12-07 19:26:53,093 | sd | DEBUG | installer | Installed packages: 226
2023-12-07 19:26:53,093 | sd | DEBUG | installer | Extensions all: ['clip-interrogator-ext', 'Lora', 'sd-extension-chainner', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg']
2023-12-07 19:26:53,150 | sd | DEBUG | installer | Submodule: extensions-builtin\clip-interrogator-ext / main
2023-12-07 19:26:53,790 | sd | DEBUG | installer | Running extension installer: E:\automatic\extensions-builtin\clip-interrogator-ext\install.py
2023-12-07 19:26:59,543 | sd | DEBUG | installer | Submodule: extensions-builtin\sd-extension-chainner / main
2023-12-07 19:27:00,539 | sd | DEBUG | installer | Submodule: extensions-builtin\sd-extension-system-info / main
2023-12-07 19:27:01,162 | sd | DEBUG | installer | Running extension installer: E:\automatic\extensions-builtin\sd-extension-system-info\install.py
2023-12-07 19:27:01,555 | sd | DEBUG | installer | Submodule: extensions-builtin\sd-webui-agent-scheduler / main
2023-12-07 19:27:02,201 | sd | DEBUG | installer | Running extension installer: E:\automatic\extensions-builtin\sd-webui-agent-scheduler\install.py
2023-12-07 19:27:02,596 | sd | DEBUG | installer | Submodule: extensions-builtin\stable-diffusion-webui-images-browser / main
2023-12-07 19:27:03,235 | sd | DEBUG | installer | Running extension installer: E:\automatic\extensions-builtin\stable-diffusion-webui-images-browser\install.py
2023-12-07 19:27:03,633 | sd | DEBUG | installer | Submodule: extensions-builtin\stable-diffusion-webui-rembg / master
2023-12-07 19:27:04,273 | sd | DEBUG | installer | Running extension installer: E:\automatic\extensions-builtin\stable-diffusion-webui-rembg\install.py
2023-12-07 19:27:04,614 | sd | DEBUG | installer | Extensions all: ['a1111-sd-webui-tagcomplete', 'adetailer', 'stable-diffusion-webui-wildcards', 'ultimate-upscale-for-automatic1111']
2023-12-07 19:27:04,669 | sd | DEBUG | installer | Submodule: extensions\a1111-sd-webui-tagcomplete / main
2023-12-07 19:27:05,448 | sd | DEBUG | installer | Submodule: extensions\adetailer / main
2023-12-07 19:27:06,104 | sd | DEBUG | installer | Running extension installer: E:\automatic\extensions\adetailer\install.py
2023-12-07 19:27:06,537 | sd | DEBUG | installer | Submodule: extensions\stable-diffusion-webui-wildcards / master
2023-12-07 19:27:07,316 | sd | DEBUG | installer | Submodule: extensions\ultimate-upscale-for-automatic1111 / master
2023-12-07 19:27:08,037 | sd | INFO | installer | Extensions enabled: ['clip-interrogator-ext', 'Lora', 'sd-extension-chainner', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg', 'a1111-sd-webui-tagcomplete', 'adetailer', 'stable-diffusion-webui-wildcards', 'ultimate-upscale-for-automatic1111']
2023-12-07 19:27:08,038 | sd | INFO | installer | Verifying requirements
2023-12-07 19:27:08,043 | sd | INFO | installer | Updating Wiki
2023-12-07 19:27:08,097 | sd | DEBUG | installer | Submodule: E:\automatic\wiki / master
2023-12-07 19:27:08,746 | sd | DEBUG | launch | Setup complete without errors: 1701970029
2023-12-07 19:27:08,755 | sd | INFO | installer | Extension preload: {'extensions-builtin': 0.0, 'extensions': 0.0}
2023-12-07 19:27:08,756 | sd | DEBUG | launch | Starting module: <module 'webui' from 'E:\\automatic\\webui.py'>
2023-12-07 19:27:08,757 | sd | INFO | launch | Command line args: ['--reset', '--upgrade'] reset=True upgrade=True
2023-12-07 19:27:13,106 | sd | INFO | loader | Load packages: torch=2.1.1+cu121 diffusers=0.24.0 gradio=3.43.2
2023-12-07 19:27:13,880 | sd | DEBUG | shared | Read: file="config.json" json=51 bytes=2098
2023-12-07 19:27:13,880 | sd | DEBUG | shared | Unknown settings: ['multiple_tqdm', 'animatediff_model_path', 'animatediff_s3_host', 'animatediff_s3_port', 'animatediff_s3_access_key', 'animatediff_s3_secret_key', 'animatediff_s3_storge_bucket']
2023-12-07 19:27:13,881 | sd | INFO | shared | Engine: backend=Backend.DIFFUSERS compute=cuda mode=no_grad device=cuda cross-optimization="xFormers"
2023-12-07 19:27:13,925 | sd | INFO | shared | Device: device=NVIDIA GeForce RTX 4070 n=1 arch=sm_90 cap=(8, 9) cuda=12.1 cudnn=8801 driver=546.29
2023-12-07 19:27:22,161 | sd | DEBUG | webui | Entering start sequence
2023-12-07 19:27:22,164 | sd | DEBUG | webui | Initializing
2023-12-07 19:27:22,166 | sd | INFO | sd_vae | Available VAEs: path="models\VAE" items=6
2023-12-07 19:27:22,168 | sd | INFO | shared | Disabling uncompatible extensions: backend=Backend.DIFFUSERS ['a1111-sd-webui-lycoris', 'sd-webui-animatediff']
2023-12-07 19:27:22,171 | sd | DEBUG | modelloader | Scanning diffusers cache: models\Diffusers models\Diffusers items=1 time=0.00
2023-12-07 19:27:22,176 | sd | DEBUG | shared | Read: file="cache.json" json=2 bytes=8090
2023-12-07 19:27:22,182 | sd | DEBUG | shared | Read: file="metadata.json" json=111 bytes=144812
2023-12-07 19:27:22,188 | sd | INFO | sd_models | Available models: path="models\Stable-diffusion" items=20 time=0.02
2023-12-07 19:27:22,856 | sd | DEBUG | webui | Load extensions
2023-12-07 19:27:24,130 | sd | INFO | script_loading | Extension: script='extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py' Using sqlite file: extensions-builtin\sd-webui-agent-scheduler\task_scheduler.sqlite3
2023-12-07 19:27:26,140 | sd | INFO | script_loading | Extension: script='extensions\a1111-sd-webui-tagcomplete\scripts\tag_autocomplete_helper.py' Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
2023-12-07 19:27:27,026 | sd | INFO | script_loading | Extension: script='extensions\adetailer\scripts\!adetailer.py' [-] ADetailer initialized. version: 23.11.1, num models: 9
2023-12-07 19:27:27,035 | sd | INFO | webui | Extensions time: 4.18 { clip-interrogator-ext=0.72 Lora=0.05 sd-extension-chainner=0.07 sd-webui-agent-scheduler=0.40 stable-diffusion-webui-images-browser=0.15 stable-diffusion-webui-rembg=1.82 adetailer=0.88 }
2023-12-07 19:27:27,076 | sd | DEBUG | shared | Read: file="html/upscalers.json" json=4 bytes=2672
2023-12-07 19:27:27,080 | sd | DEBUG | shared | Read: file="extensions-builtin\sd-extension-chainner\models.json" json=24 bytes=2719
2023-12-07 19:27:27,081 | sd | DEBUG | chainner_model | chaiNNer models: path="models\chaiNNer" defined=24 discovered=0 downloaded=5
2023-12-07 19:27:27,085 | sd | DEBUG | modelloader | Load upscalers: total=52 downloaded=15 user=0 time=0.05 ['None', 'Lanczos', 'Nearest', 'ChaiNNer', 'ESRGAN', 'LDSR', 'RealESRGAN', 'SCUNet', 'SD', 'SwinIR']
2023-12-07 19:27:27,506 | sd | DEBUG | styles | Load styles: folder="models\styles" items=289 time=0.42
2023-12-07 19:27:27,512 | sd | DEBUG | webui | Creating UI
2023-12-07 19:27:27,732 | sd | INFO | theme | Load UI theme: name="black-teal" style=Auto base=sdnext.css
2023-12-07 19:27:27,837 | sd | DEBUG | shared | Read: file="html\reference.json" json=18 bytes=11012
2023-12-07 19:27:28,036 | sd | DEBUG | ui_extra_networks | Extra networks: page='model' items=38 subfolders=5 tab=txt2img folders=['models\\Stable-diffusion', 'models\\Diffusers', 'models\\Reference', 'E:\\automatic\\models\\Stable-diffusion'] list=0.09 desc=0.02 info=0.13 workers=2
2023-12-07 19:27:28,052 | sd | DEBUG | ui_extra_networks | Extra networks: page='style' items=289 subfolders=2 tab=txt2img folders=['models\\styles', 'html'] list=0.02 desc=0.00 info=0.00 workers=2
2023-12-07 19:27:28,054 | sd | DEBUG | ui_extra_networks | Extra networks: page='embedding' items=11 subfolders=1 tab=txt2img folders=['models\\embeddings'] list=0.04 desc=0.00 info=0.05 workers=2
2023-12-07 19:27:28,054 | sd | DEBUG | ui_extra_networks | Extra networks: page='hypernetwork' items=0 subfolders=1 tab=txt2img folders=['models\\hypernetworks'] list=0.00 desc=0.00 info=0.00 workers=2
2023-12-07 19:27:28,055 | sd | DEBUG | ui_extra_networks | Extra networks: page='vae' items=6 subfolders=1 tab=txt2img folders=['models\\VAE'] list=0.03 desc=0.00 info=0.02 workers=2
2023-12-07 19:27:28,058 | sd | DEBUG | ui_extra_networks | Extra networks: page='lora' items=38 subfolders=1 tab=txt2img folders=['models\\Lora', 'models\\LyCORIS'] list=0.11 desc=0.01 info=0.18 workers=2
2023-12-07 19:27:28,228 | sd | DEBUG | shared | Read: file="ui-config.json" json=0 bytes=2
2023-12-07 19:27:28,547 | sd | DEBUG | theme | Themes: builtin=6 default=5 external=55
2023-12-07 19:27:29,955 | sd | DEBUG | script_callbacks | Script: 1.31 ui_tabs E:\automatic\extensions-builtin\stable-diffusion-webui-images-browser\scripts\image_browser.py
2023-12-07 19:27:29,964 | sd | DEBUG | shared | Read: file="E:\automatic\html\extensions.json" json=328 bytes=191889
2023-12-07 19:27:30,947 | sd | DEBUG | ui_extensions | Extension list: processed=312 installed=13 enabled=11 disabled=2 visible=312 hidden=0
2023-12-07 19:27:31,248 | sd | INFO | webui | Local URL: http://127.0.0.1:7860/
2023-12-07 19:27:31,249 | sd | DEBUG | webui | Gradio functions: registered=1751
2023-12-07 19:27:31,250 | sd | INFO | middleware | Initializing middleware
2023-12-07 19:27:31,255 | sd | DEBUG | webui | Creating API
2023-12-07 19:27:31,397 | sd | INFO | task_runner | [AgentScheduler] Task queue is empty
2023-12-07 19:27:31,398 | sd | INFO | api | [AgentScheduler] Registering APIs
2023-12-07 19:27:31,512 | sd | DEBUG | webui | Scripts setup: ['X/Y/Z Grid:0.006', 'ADetailer:0.02']
2023-12-07 19:27:31,512 | sd | DEBUG | sd_models | Model metadata: file="metadata.json" no changes
2023-12-07 19:27:31,512 | sd | DEBUG | webui | Model auto load disabled
2023-12-07 19:27:31,513 | sd | DEBUG | shared | Save: file="config.json" json=51 bytes=2033
2023-12-07 19:27:31,513 | sd | DEBUG | shared | Unused settings: ['multiple_tqdm', 'animatediff_model_path', 'animatediff_s3_host', 'animatediff_s3_port', 'animatediff_s3_access_key', 'animatediff_s3_secret_key', 'animatediff_s3_storge_bucket']
2023-12-07 19:27:31,513 | sd | INFO | webui | Startup time: 22.75 { torch=2.40 gradio=1.85 diffusers=0.09 libraries=9.06 extensions=4.18 face-restore=0.67 extra-networks=0.43 ui-extra-networks=0.55 ui-img2img=0.06 ui-settings=0.40 ui-extensions=2.35 ui-defaults=0.06 launch=0.23 api=0.08 app-started=0.18 }
2023-12-07 19:27:59,532 | sd | DEBUG | launch | Server: alive=True jobs=0 requests=2 uptime=46 memory=1.12/31.92 backend=Backend.DIFFUSERS state=idle
2023-12-07 19:28:04,958 | sd | INFO | api | MOTD: N/A
2023-12-07 19:28:07,280 | sd | DEBUG | theme | Themes: builtin=6 default=5 external=55
2023-12-07 19:28:07,390 | sd | INFO | api | Browser session: user=None client=127.0.0.1 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36
2023-12-07 19:28:13,950 | sd | DEBUG | modelloader | Scanning diffusers cache: models\Diffusers models\Diffusers items=1 time=0.00
2023-12-07 19:28:13,952 | sd | INFO | sd_models | Available models: path="models\Stable-diffusion" items=20 time=0.00
2023-12-07 19:28:23,192 | sd | INFO | sd_models | Select: model="SD1.5\dreamshaper_8 [879db523c3]"
2023-12-07 19:28:23,194 | sd | DEBUG | sd_models | Load model weights: existing=False target=E:\automatic\models\Stable-diffusion\SD1.5\dreamshaper_8.safetensors info=None
2023-12-07 19:28:23,524 | sd | DEBUG | devices | Desired Torch parameters: dtype=FP16 no-half=False no-half-vae=False upscast=False
2023-12-07 19:28:23,524 | sd | INFO | devices | Setting Torch parameters: device=cuda dtype=torch.float16 vae=torch.float16 unet=torch.float16 context=inference_mode fp16=True bf16=False
2023-12-07 19:28:23,526 | sd | INFO | sd_models | Autodetect: model="Stable Diffusion" class=StableDiffusionPipeline file="E:\automatic\models\Stable-diffusion\SD1.5\dreamshaper_8.safetensors" size=2034MB
2023-12-07 19:28:31,675 | sd | DEBUG | sd_models | Setting model: pipeline=StableDiffusionPipeline config={'low_cpu_mem_usage': True, 'torch_dtype': torch.float16, 'load_connected_pipeline': True, 'variant': 'fp16', 'extract_ema': True, 'force_zeros_for_empty_prompt ': True, 'requires_aesthetics_score': False, 'use_safetensors': True}
2023-12-07 19:28:31,680 | sd | DEBUG | sd_models | Setting model VAE: name=None upcast=True
2023-12-07 19:28:32,839 | sd | INFO | textual_inversion | Load embeddings: loaded=10 skipped=1 time=0.61
2023-12-07 19:28:33,110 | sd | DEBUG | devices | gc: collected=1419 device=cuda {'ram': {'used': 4.55, 'total': 31.92}, 'gpu': {'used': 3.3, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:28:33,113 | sd | INFO | sd_models | Load model: time=9.65 { load=9.64 } native=512 {'ram': {'used': 4.55, 'total': 31.92}, 'gpu': {'used': 3.3, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:28:33,117 | sd | DEBUG | shared | Save: file="config.json" json=50 bytes=1991
2023-12-07 19:28:33,118 | sd | DEBUG | shared | Unused settings: ['multiple_tqdm', 'animatediff_model_path', 'animatediff_s3_host', 'animatediff_s3_port', 'animatediff_s3_access_key', 'animatediff_s3_secret_key', 'animatediff_s3_storge_bucket']
2023-12-07 19:28:33,118 | sd | INFO | ui | Settings: changed=1 ['sd_vae']
2023-12-07 19:28:49,472 | sd | DEBUG | txt2img | txt2img: id_task=task(v3jmyo3z5xupx33)|prompt=girl in field, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:28:49,473 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:28:49,475 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:28:50,096 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:28:55,642 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.855
2023-12-07 19:28:55,660 | sd | DEBUG | images | Saving: image="outputs\text\00623-dreamshaper_8-girl in field.jpg" type=JPEG size=512x512
2023-12-07 19:28:55,664 | sd | INFO | processing | Processed: images=1 time=6.16 its=8.11 memory={'ram': {'used': 1.95, 'total': 31.92}, 'gpu': {'used': 3.4, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:29:02,144 | sd | DEBUG | txt2img | txt2img: id_task=task(15x1kbu7sp0zw8c)|prompt=girl in field, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:29:02,145 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:29:02,146 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:29:02,214 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:29:06,563 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.749
2023-12-07 19:29:06,575 | sd | DEBUG | images | Saving: image="outputs\text\00624-dreamshaper_8-girl in field.jpg" type=JPEG size=512x512
2023-12-07 19:29:06,578 | sd | INFO | processing | Processed: images=1 time=4.43 its=11.29 memory={'ram': {'used': 1.97, 'total': 31.92}, 'gpu': {'used': 3.48, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:29:07,239 | sd | DEBUG | txt2img | txt2img: id_task=task(deob5k1crsrwrgp)|prompt=girl in field, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:29:07,240 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:29:07,241 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:29:07,309 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:29:09,955 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.011
2023-12-07 19:29:10,030 | sd | DEBUG | images | Saving: image="outputs\text\00625-dreamshaper_8-girl in field.jpg" type=JPEG size=512x512
2023-12-07 19:29:10,033 | sd | INFO | processing | Processed: images=1 time=2.79 its=17.94 memory={'ram': {'used': 1.97, 'total': 31.92}, 'gpu': {'used': 4.16, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:29:12,526 | sd | DEBUG | txt2img | txt2img: id_task=task(ojneu6edz0v3wn9)|prompt=girl in field, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:29:12,526 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:29:12,528 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:29:12,594 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:29:15,259 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.01
2023-12-07 19:29:15,337 | sd | DEBUG | images | Saving: image="outputs\text\00626-dreamshaper_8-girl in field.jpg" type=JPEG size=512x512
2023-12-07 19:29:15,341 | sd | INFO | processing | Processed: images=1 time=2.81 its=17.81 memory={'ram': {'used': 1.97, 'total': 31.92}, 'gpu': {'used': 4.16, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:29:15,995 | sd | DEBUG | txt2img | txt2img: id_task=task(h6l2ic605tfyw1l)|prompt=girl in field, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:29:15,996 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:29:15,997 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:29:16,065 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:29:18,697 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.009
2023-12-07 19:29:18,775 | sd | DEBUG | images | Saving: image="outputs\text\00627-dreamshaper_8-girl in field.jpg" type=JPEG size=512x512
2023-12-07 19:29:18,779 | sd | INFO | processing | Processed: images=1 time=2.78 its=18.01 memory={'ram': {'used': 1.97, 'total': 31.92}, 'gpu': {'used': 4.16, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:29:28,995 | sd | DEBUG | txt2img | txt2img: id_task=task(e8vk4guvsk0iamc)|prompt=girl in field, <lora:add_detail:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:29:28,995 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:29:28,997 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:29:29,717 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail'] patch=0.00 load=0.71
2023-12-07 19:29:29,860 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:29:36,059 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.758
2023-12-07 19:29:36,067 | sd | DEBUG | images | Saving: image="outputs\text\00628-dreamshaper_8-girl in field lora add detail 1.jpg" type=JPEG size=512x512
2023-12-07 19:29:36,071 | sd | INFO | processing | Processed: images=1 time=7.07 its=7.07 memory={'ram': {'used': 1.97, 'total': 31.92}, 'gpu': {'used': 3.52, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:29:36,599 | sd | DEBUG | txt2img | txt2img: id_task=task(smgo0c64sk0hgiu)|prompt=girl in field, <lora:add_detail:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:29:36,599 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:29:36,601 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:29:36,608 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail'] patch=0.00 load=0.00
2023-12-07 19:29:36,736 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:29:41,071 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.012
2023-12-07 19:29:41,147 | sd | DEBUG | images | Saving: image="outputs\text\00629-dreamshaper_8-girl in field lora add detail 1.jpg" type=JPEG size=512x512
2023-12-07 19:29:41,151 | sd | INFO | processing | Processed: images=1 time=4.54 its=11.01 memory={'ram': {'used': 1.97, 'total': 31.92}, 'gpu': {'used': 4.2, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:29:42,272 | sd | DEBUG | txt2img | txt2img: id_task=task(lj8yy3ghao2ebb1)|prompt=girl in field, <lora:add_detail:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:29:42,272 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:29:42,273 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:29:42,280 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail'] patch=0.00 load=0.00
2023-12-07 19:29:42,402 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:29:48,499 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.776
2023-12-07 19:29:48,509 | sd | DEBUG | images | Saving: image="outputs\text\00630-dreamshaper_8-girl in field lora add detail 1.jpg" type=JPEG size=512x512
2023-12-07 19:29:48,513 | sd | INFO | processing | Processed: images=1 time=6.24 its=8.02 memory={'ram': {'used': 1.98, 'total': 31.92}, 'gpu': {'used': 3.58, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:29:49,040 | sd | DEBUG | txt2img | txt2img: id_task=task(pk6m7ur3gzsjkej)|prompt=girl in field, <lora:add_detail:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:29:49,041 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:29:49,042 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:29:49,048 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail'] patch=0.00 load=0.00
2023-12-07 19:29:49,181 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:29:53,641 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.012
2023-12-07 19:29:53,717 | sd | DEBUG | images | Saving: image="outputs\text\00631-dreamshaper_8-girl in field lora add detail 1.jpg" type=JPEG size=512x512
2023-12-07 19:29:53,721 | sd | INFO | processing | Processed: images=1 time=4.68 its=10.70 memory={'ram': {'used': 1.99, 'total': 31.92}, 'gpu': {'used': 4.19, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:29:55,431 | sd | DEBUG | txt2img | txt2img: id_task=task(g98eqj1uzs44vhl)|prompt=girl in field, <lora:add_detail:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:29:55,431 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:29:55,433 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:29:55,438 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail'] patch=0.00 load=0.00
2023-12-07 19:29:55,558 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:29:59,596 | sd | DEBUG | launch | Server: alive=True jobs=1 requests=352 uptime=166 memory=1.98/31.92 backend=Backend.DIFFUSERS state=job="run_settings" 0/-1
2023-12-07 19:30:01,794 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.82
2023-12-07 19:30:01,803 | sd | DEBUG | images | Saving: image="outputs\text\00632-dreamshaper_8-girl in field lora add detail 1.jpg" type=JPEG size=512x512
2023-12-07 19:30:01,808 | sd | INFO | processing | Processed: images=1 time=6.37 its=7.85 memory={'ram': {'used': 1.98, 'total': 31.92}, 'gpu': {'used': 3.57, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:30:09,445 | sd | DEBUG | txt2img | txt2img: id_task=task(mve8mu8tognhqcz)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:30:09,445 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:30:09,447 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:30:09,557 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details'] patch=0.00 load=0.11
2023-12-07 19:30:09,754 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:30:16,126 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.012
2023-12-07 19:30:16,201 | sd | DEBUG | images | Saving: image="outputs\text\00633-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:30:16,205 | sd | INFO | processing | Processed: images=1 time=6.75 its=7.40 memory={'ram': {'used': 1.98, 'total': 31.92}, 'gpu': {'used': 4.19, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:30:17,102 | sd | DEBUG | txt2img | txt2img: id_task=task(hoqxdeegle1hyhv)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:30:17,103 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:30:17,104 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:30:17,110 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details'] patch=0.00 load=0.00
2023-12-07 19:30:17,291 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:30:23,526 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.01
2023-12-07 19:30:23,603 | sd | DEBUG | images | Saving: image="outputs\text\00634-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:30:23,607 | sd | INFO | processing | Processed: images=1 time=6.50 its=7.69 memory={'ram': {'used': 1.98, 'total': 31.92}, 'gpu': {'used': 4.19, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:30:24,662 | sd | DEBUG | txt2img | txt2img: id_task=task(o4x123fl57dyik3)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:30:24,663 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:30:24,664 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:30:24,670 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details'] patch=0.00 load=0.00
2023-12-07 19:30:24,844 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:30:31,067 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.01
2023-12-07 19:30:31,144 | sd | DEBUG | images | Saving: image="outputs\text\00635-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:30:31,149 | sd | INFO | processing | Processed: images=1 time=6.48 its=7.72 memory={'ram': {'used': 1.98, 'total': 31.92}, 'gpu': {'used': 4.19, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:30:32,226 | sd | DEBUG | txt2img | txt2img: id_task=task(00wl0uhhk6zglqh)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:30:32,227 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:30:32,228 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:30:32,234 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details'] patch=0.00 load=0.00
2023-12-07 19:30:32,406 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:30:38,713 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.014
2023-12-07 19:30:38,787 | sd | DEBUG | images | Saving: image="outputs\text\00636-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:30:38,791 | sd | INFO | processing | Processed: images=1 time=6.56 its=7.62 memory={'ram': {'used': 1.98, 'total': 31.92}, 'gpu': {'used': 4.19, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:30:39,717 | sd | DEBUG | txt2img | txt2img: id_task=task(9zkj2dlmwh1yaxe)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:30:39,718 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:30:39,719 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:30:39,725 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details'] patch=0.00 load=0.00
2023-12-07 19:30:39,897 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:30:46,138 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.009
2023-12-07 19:30:46,216 | sd | DEBUG | images | Saving: image="outputs\text\00637-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:30:46,219 | sd | INFO | processing | Processed: images=1 time=6.50 its=7.70 memory={'ram': {'used': 1.98, 'total': 31.92}, 'gpu': {'used': 4.19, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:30:57,446 | sd | DEBUG | txt2img | txt2img: id_task=task(2jrg1c0wx2ccxzs)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, <lora:3DMM_V3:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:30:57,447 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:30:57,448 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:30:58,067 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details', '3DMM_V3'] patch=0.00 load=0.61
2023-12-07 19:30:58,334 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:31:08,510 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.816
2023-12-07 19:31:08,518 | sd | DEBUG | images | Saving: image="outputs\text\00638-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:31:08,521 | sd | INFO | processing | Processed: images=1 time=11.07 its=4.52 memory={'ram': {'used': 1.99, 'total': 31.92}, 'gpu': {'used': 3.62, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:31:09,078 | sd | DEBUG | txt2img | txt2img: id_task=task(q6o82w9v1qw0fph)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, <lora:3DMM_V3:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:31:09,078 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:31:09,080 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:31:09,086 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details', '3DMM_V3'] patch=0.00 load=0.00
2023-12-07 19:31:09,319 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:31:17,432 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.012
2023-12-07 19:31:17,509 | sd | DEBUG | images | Saving: image="outputs\text\00639-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:31:17,513 | sd | INFO | processing | Processed: images=1 time=8.43 its=5.93 memory={'ram': {'used': 1.99, 'total': 31.92}, 'gpu': {'used': 4.29, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:31:18,566 | sd | DEBUG | txt2img | txt2img: id_task=task(s7goa2qahp8e642)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, <lora:3DMM_V3:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:31:18,567 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:31:18,568 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:31:18,574 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details', '3DMM_V3'] patch=0.00 load=0.00
2023-12-07 19:31:18,802 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:31:26,904 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.009
2023-12-07 19:31:26,981 | sd | DEBUG | images | Saving: image="outputs\text\00640-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:31:26,985 | sd | INFO | processing | Processed: images=1 time=8.41 its=5.94 memory={'ram': {'used': 1.99, 'total': 31.92}, 'gpu': {'used': 4.29, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:31:27,766 | sd | DEBUG | txt2img | txt2img: id_task=task(psitzx24rjoxnti)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, <lora:3DMM_V3:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:31:27,767 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:31:27,768 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:31:27,775 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details', '3DMM_V3'] patch=0.00 load=0.00
2023-12-07 19:31:28,003 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:31:36,118 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.009
2023-12-07 19:31:36,198 | sd | DEBUG | images | Saving: image="outputs\text\00641-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:31:36,202 | sd | INFO | processing | Processed: images=1 time=8.43 its=5.93 memory={'ram': {'used': 1.99, 'total': 31.92}, 'gpu': {'used': 4.29, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:31:36,936 | sd | DEBUG | txt2img | txt2img: id_task=task(zyawufeeutf82z5)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, <lora:3DMM_V3:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:31:36,937 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:31:36,938 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:31:36,944 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details', '3DMM_V3'] patch=0.00 load=0.00
2023-12-07 19:31:37,169 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:31:45,284 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.01
2023-12-07 19:31:45,363 | sd | DEBUG | images | Saving: image="outputs\text\00642-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:31:45,367 | sd | INFO | processing | Processed: images=1 time=8.42 its=5.94 memory={'ram': {'used': 1.98, 'total': 31.92}, 'gpu': {'used': 4.29, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:31:52,564 | sd | DEBUG | txt2img | txt2img: id_task=task(jr5gmd9wm4c0cua)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, <lora:3DMM_V3:1>, <lora:lo_dress_classic_style3_v1:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:31:52,565 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:31:52,566 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:31:53,350 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details', '3DMM_V3', 'lo_dress_classic_style3_v1'] patch=0.00 load=0.78
2023-12-07 19:31:53,658 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:31:59,643 | sd | DEBUG | launch | Server: alive=True jobs=1 requests=625 uptime=286 memory=2.02/31.92 backend=Backend.DIFFUSERS state=job="run_settings" 0/-1
2023-12-07 19:32:03,751 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.012
2023-12-07 19:32:03,824 | sd | DEBUG | images | Saving: image="outputs\text\00643-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:32:03,828 | sd | INFO | processing | Processed: images=1 time=11.26 its=4.44 memory={'ram': {'used': 1.99, 'total': 31.92}, 'gpu': {'used': 4.29, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:32:04,801 | sd | DEBUG | txt2img | txt2img: id_task=task(4arvn1o78ujt1c6)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, <lora:3DMM_V3:1>, <lora:lo_dress_classic_style3_v1:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:32:04,802 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:32:04,803 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:32:04,809 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details', '3DMM_V3', 'lo_dress_classic_style3_v1'] patch=0.00 load=0.00
2023-12-07 19:32:05,093 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:32:16,957 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.765
2023-12-07 19:32:16,964 | sd | DEBUG | images | Saving: image="outputs\text\00644-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:32:16,967 | sd | INFO | processing | Processed: images=1 time=12.16 its=4.11 memory={'ram': {'used': 2.0, 'total': 31.92}, 'gpu': {'used': 3.68, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:32:20,620 | sd | DEBUG | txt2img | txt2img: id_task=task(aepa96lvla1toka)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, <lora:3DMM_V3:1>, <lora:lo_dress_classic_style3_v1:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:32:20,621 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:32:20,623 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:32:20,633 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details', '3DMM_V3', 'lo_dress_classic_style3_v1'] patch=0.00 load=0.00
2023-12-07 19:32:20,914 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:32:32,759 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.764
2023-12-07 19:32:32,769 | sd | DEBUG | images | Saving: image="outputs\text\00645-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:32:32,773 | sd | INFO | processing | Processed: images=1 time=12.14 its=4.12 memory={'ram': {'used': 2.0, 'total': 31.92}, 'gpu': {'used': 3.68, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:32:33,474 | sd | DEBUG | txt2img | txt2img: id_task=task(srcb6mmtmi95g7s)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, <lora:3DMM_V3:1>, <lora:lo_dress_classic_style3_v1:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:32:33,474 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:32:33,476 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:32:33,482 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details', '3DMM_V3', 'lo_dress_classic_style3_v1'] patch=0.00 load=0.00
2023-12-07 19:32:33,766 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:32:45,769 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.766
2023-12-07 19:32:45,779 | sd | DEBUG | images | Saving: image="outputs\text\00646-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:32:45,783 | sd | INFO | processing | Processed: images=1 time=12.30 its=4.06 memory={'ram': {'used': 2.0, 'total': 31.92}, 'gpu': {'used': 3.69, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:32:46,480 | sd | DEBUG | txt2img | txt2img: id_task=task(05v9r44936gtxqe)|prompt=girl in field, <lora:add_detail:1>, <lora:more_details:1>, <lora:3DMM_V3:1>, <lora:lo_dress_classic_style3_v1:1>, |negative_prompt=|prompt_styles=[]|steps=50|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=512|width=512|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-12-07 19:32:46,481 | sd | INFO | sd_hijack_freeu | Applying free-u: b1=1.2 b2=1.4 s1=0.9 s2=0.2
2023-12-07 19:32:46,482 | sd | INFO | sd_hijack_hypertile | Applying hypertile: unet=368
2023-12-07 19:32:46,487 | sd | INFO | extra_networks_lora | Applying LoRA: ['add_detail', 'more_details', '3DMM_V3', 'lo_dress_classic_style3_v1'] patch=0.00 load=0.00
2023-12-07 19:32:46,774 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 50, 'eta': 1.0, 'guidance_rescale': 0.7, 'height': 512, 'width': 512, 'parser': 'Full parser'}
2023-12-07 19:32:56,662 | sd | DEBUG | processing_diffusers | VAE decode: name=baked dtype=torch.float16 upcast=False images=1 latents=torch.Size([1, 4, 64, 64]) time=0.014
2023-12-07 19:32:56,735 | sd | DEBUG | images | Saving: image="outputs\text\00647-dreamshaper_8-girl in field lora add detail 1 lora.jpg" type=JPEG size=512x512
2023-12-07 19:32:56,739 | sd | INFO | processing | Processed: images=1 time=10.25 its=4.88 memory={'ram': {'used': 2.0, 'total': 31.92}, 'gpu': {'used': 4.36, 'total': 11.99}, 'retries': 0, 'oom': 0}
2023-12-07 19:33:59,691 | sd | DEBUG | launch | Server: alive=True jobs=1 requests=773 uptime=406 memory=2.0/31.92 backend=Backend.DIFFUSERS state=job="run_settings" 0/-1
2023-12-07 19:34:25,378 | sd | INFO | webui | Exiting
> In this case, would the merge/unmerge overhead be bigger or smaller than, for example, going from an average of 3 seconds per generation (with no lora) to an average of 11 seconds with 4 loras?
there is no simple answer. how much processing overhead a lora adds doesn't depend on the number of loras or even their size, but on the number of defined blocks inside the lora itself, as each block requires a jump from the base model to the lora and then a jump back. so if a lora is large but relatively simple, a merge would be much slower. but if a lora is complex but relatively small, the merge would be fast and on-the-fly processing would have the bigger impact.
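The per-block jump he describes can be illustrated with a toy example. This is a minimal numpy sketch of the two strategies on a single linear block, not SD.Next's actual implementation — all names and dimensions here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: one "block" is a weight matrix W with a rank-r LoRA delta B @ A.
d_in, d_out, rank = 64, 64, 4
W = rng.standard_normal((d_in, d_out))
A = rng.standard_normal((d_in, rank))
B = rng.standard_normal((rank, d_out))
x = rng.standard_normal((1, d_in))

# On-the-fly: every forward pass through this block does the base matmul
# plus two extra small matmuls for the LoRA branch (the per-block "jump").
y_on_the_fly = x @ W + (x @ A) @ B

# Merge-based: pay a one-time cost to fold the delta into the weights;
# afterwards each forward pass is a single matmul at base-model speed.
W_merged = W + A @ B
y_merged = x @ W_merged

# Both paths compute the same result.
assert np.allclose(y_on_the_fly, y_merged)
```

With on-the-fly application the extra matmuls repeat at every sampling step for every patched block, so per-step cost grows with the number of blocks across all loaded loras; with merging, the `A @ B` fold is paid once per load/unload and inference then runs at base-model speed.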
i might add a secondary method in options so you could choose whether you want on-the-fly or merge-based processing. but other than that, there really isn't much i can do here under this issue.
off-topic: you have both freeu and hypertile enabled - it's always best practice when troubleshooting to reduce the number of variables. if we're focusing on lora, then all other settings should be left at their defaults as much as possible.
Thanks, and apologies for not disabling freeu/hypertile; I'll be more careful with any future reports.
As for the option to choose between on-the-fly and merge-based, I would consider this an optimal outcome and fix.
this has been added in the dev branch (changelog notes are updated) and will be merged to master in the next release.
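Conceptually, a merge-based mode works like fuse/unfuse: fold the delta into the weights before generation, then subtract it back out when the lora is removed from the prompt. Below is a hypothetical numpy sketch of that mechanism — the class and method names are illustrative, not SD.Next's code:

```python
import numpy as np

class LoraLinear:
    """Illustrative linear layer supporting merge ("fuse") and unmerge of a LoRA delta."""

    def __init__(self, weight):
        self.weight = weight.copy()
        self.delta = None  # currently fused LoRA delta, if any

    def fuse(self, A, B, scale=1.0):
        # One-time cost: fold the low-rank update into the base weights.
        self.delta = scale * (A @ B)
        self.weight += self.delta

    def unfuse(self):
        # Restore the base weights when the lora is removed.
        if self.delta is not None:
            self.weight -= self.delta
            self.delta = None

    def forward(self, x):
        # After fusing, inference is a single matmul - no per-step LoRA branch.
        return x @ self.weight

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 8))
layer = LoraLinear(W)
A, B = rng.standard_normal((8, 2)), rng.standard_normal((2, 8))

layer.fuse(A, B)
layer.unfuse()
assert np.allclose(layer.weight, W)  # weights restored after unfuse
```

One caveat worth noting: in reduced precision such as float16, repeated fuse/unfuse cycles can accumulate rounding error in the base weights, which is one reason implementations tend to keep an on-the-fly mode available as well.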
Issue Description
When using loras in prompts, generation speed drops proportionally to the number of loras loaded in the prompt. This behavior is replicated consistently across both the diffusers and original backends and is sampler-agnostic.
Example: sdnext with diffusers, dreamshaper 1.5, all default settings, default extensions, default generation parameters, 50 steps: 19 it/s. Loading one lora into the prompt drops generation speed to about 11 it/s, and the speed drops further as more loras are added.
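Using the reported speeds, the per-step overhead of one lora can be estimated with a quick back-of-envelope calculation:

```python
# Reported speeds: ~19 it/s with no lora, ~11 it/s with one lora.
base_its, lora_its = 19.0, 11.0

per_iter_overhead = 1 / lora_its - 1 / base_its   # seconds added per step
per_image_overhead = per_iter_overhead * 50       # at 50 steps per image

print(f"~{per_iter_overhead * 1000:.0f} ms per step, "
      f"~{per_image_overhead:.1f} s per 50-step image")
# prints: ~38 ms per step, ~1.9 s per 50-step image
```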
https://github.com/vladmandic/automatic/assets/17099756/b4dfa5d7-1ce6-4a25-b154-65242ad6cf14
Version Platform Description
SDNext Version: app=sd.next updated=2023-12-04 hash=93f35ccf, Python 3.10.11, Windows 11 23H2 (OS build 22631.2792), Nvidia 4070, driver version 546.29
Relevant log output
Backend
Diffusers
Branch
Master
Model
SD 1.5
Acknowledgements