vladmandic / automatic

SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
https://github.com/vladmandic/automatic
GNU Affero General Public License v3.0

[Issue]: Upscaler in SD XL - lose significant details, blurred, colors and textures become washed out #2174

Closed Kedaranatha closed 9 months ago

Kedaranatha commented 9 months ago

Issue Description

Generating images with the SDXL model using the Upscaler feature yields unpredictable results. Upscaled images lose significant detail, become blurred, and colors and textures become washed out. This issue is consistent in both txt2img and img2img modes. Many artists use the Upscaler as an opportunity to resize images while retaining or enhancing the original details, or introducing new ones. Here is an example illustrating how significant details are lost during the upscale process.

Left: with Upscaler=4x_nmkd_200k at x1.5 and Force Hires. Image metadata for the left one with Upscaler=on. Hires resize: from 896x1152 to 1344x1728.

Reference example, txt2img upscale from 896x1152 to 1344x1728. Right: SD.Next render with Upscaler=on (4x-UltraSharp or any other) at x1.5. Left: A1111 and ComfyUI with the "same" upscaler settings.

Note: Denoising strength makes no difference; tested from 0.2 to 0.7.

Details: image with Upscaler=none

painting, DMT journey, women in the amazon forest, detailed face, epic
Negative prompt: ugly, tiling, poorly drawn, blurry, blurred, watermark, grainy, signature, cut off, draft, text, logo
Steps: 30, Seed: 3884226443, Sampler: Euler, CFG scale: 7.5, Size: 896x1152, Parser: Full parser, Model: SD_XL_sd_xl_base_1.0, Model hash: 31e35c80fc, VAE: sdxl_vae, Variation seed: 1556555378, Variation strength: 0.65, Backend: Diffusers, Version: 2f071c6, Operations: "txt2img, refine", Refiner: SD_XL_sd_xl_refiner_1.0, Image CFG scale: 6, Refiner steps: 25, Refiner start: 0.8, Hires steps: 30, Latent sampler: Euler, CFG rescale: 0.7

Image with Upscaler=4x_NMKD-Siax_200k (or any other) | Force Hires=on (or off)


painting, DMT journey, women in the amazon forest, detailed face, epic
Negative prompt: ugly, tiling, poorly drawn, blurry, blurred, watermark, grainy, signature, cut off, draft, text, logo
Steps: 30, Seed: 3884226443, Sampler: Euler, CFG scale: 7.5, Size: 896x1152, Parser: Full parser, Model: SD_XL_sd_xl_base_1.0, Model hash: 31e35c80fc, VAE: sdxl_vae, Variation seed: 1556555378, Variation strength: 0.65, Backend: Diffusers, Version: 2f071c6, Operations: "txt2img, upscale, refine", Hires steps: 30, Hires upscaler: 4x_NMKD-Siax_200k, Hires upscale: 2, Hires resize: 0x0, Hires size: 1792x2304, Denoising strength: 0.5, Latent sampler: Euler, Image CFG scale: 6, CFG rescale: 0.7, Refiner: SD_XL_sd_xl_refiner_1.0, Refiner steps: 25, Refiner start: 0.8


Version Platform Description

app: SD.next updated: 2023-09-11 hash: 2f071c65

arch: AMD64 cpu: AMD64 Family 23 Model 113 Stepping 0, AuthenticAMD system: Windows release: Windows-10-10.0.22621-SP0 python: 3.10.11

torch: 2.0.1+cu118 autocast: half

device: NVIDIA GeForce RTX 3090 (1) (compute_37) (8, 6) cuda: 11.8 cudnn: 8700 driver: 537.13

xformers: diffusers: 0.20.2 transformers: 4.31.0

configured: base:SD_XL\sd_xl_base_1.0.safetensors [31e35c80fc] refiner:SD_XL\sd_xl_refiner_1.0.safetensors [7440042bbd] vae:sdxl_vae.safetensors loaded: base:D:\AI\StableDiffusion\models\Stable-diffusion\SD_XL\sd_xl_base_1.0.safetensors refiner:D:\AI\StableDiffusion\models\Stable-diffusion\SD_XL\sd_xl_refiner_1.0.safetensors vae:sdxl_vae.safetensors

Relevant log output

Image with Upscaler=4x_NMKD-Siax_200k (or any other) | Force Hires=on

2023-09-11 10:49:07,808 | sd | DEBUG | txt2img | txt2img: id_task=task(vsi7vr9jt2ao4uk)|prompt=painting, DMT journey, women in the amazon forest, detailed face, epic|negative_prompt=ugly, tiling, poorly drawn, blurry, blurred, watermark, grainy, signature, cut off, draft,  text, logo|prompt_styles=[]|steps=30|sampler_index=8|latent_index=8|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=7.5|clip_skip=1|seed=3884226443.0|subseed=1556555378.0|subseed_strength=0.65|seed_resize_from_h=0|seed_resize_from_w=0||height=1152|width=896|enable_hr=True|denoising_strength=0.5|hr_scale=2|hr_upscaler=4x_NMKD-Siax_200k|hr_force=True|hr_second_pass_steps=30|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=25|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[]
2023-09-11 10:49:09,956 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionXLPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 2048]), 'pooled_prompt_embeds': torch.Size([1, 1280]), 'negative_prompt_embeds': torch.Size([1, 77, 2048]), 'negative_pooled_prompt_embeds': torch.Size([1, 1280]), 'guidance_scale': 7.5, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 30, 'eta': 0.0, 'guidance_rescale': 0.7, 'denoising_end': None, 'height': 1152, 'width': 896}
2023-09-11 10:49:17,764 | sd | DEBUG | processing | Init hires: upscaler=4x_NMKD-Siax_200k sampler=Euler resize=0x0 upscale=1792x2304
2023-09-11 10:49:17,766 | sd | INFO | processing_diffusers | Hires: upscaler=4x_NMKD-Siax_200k width=1792 height=2304 images=1
2023-09-11 10:49:17,768 | sd | DEBUG | processing_diffusers | VAE decode: name=sdxl_vae.safetensors dtype=torch.float32 upcast=True images=1
2023-09-11 10:49:17,769 | sd | DEBUG | processing_diffusers | Moving to CPU: model=UNet
2023-09-11 10:49:24,896 | sd | INFO | sd_models | Pipeline class changed from StableDiffusionXLPipeline to StableDiffusionXLImg2ImgPipeline
2023-09-11 10:49:25,583 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionXLImg2ImgPipeline task=DiffusersTaskType.IMAGE_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 2048]), 'pooled_prompt_embeds': torch.Size([1, 1280]), 'negative_prompt_embeds': torch.Size([1, 77, 2048]), 'negative_pooled_prompt_embeds': torch.Size([1, 1280]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 61, 'eta': 0.0, 'guidance_rescale': 0.7, 'image': <class 'list'>, 'strength': 0.5}
2023-09-11 10:50:00,217 | sd | DEBUG | launch | Server alive=True jobs=5 requests=350 uptime=943s memory used=7.63 total=63.93 job="txt2img" 0/2
2023-09-11 10:50:11,726 | sd | DEBUG | processing_diffusers | Moving to CPU: model=base
2023-09-11 10:50:14,737 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionXLImg2ImgPipeline task=DiffusersTaskType.IMAGE_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 1280]), 'pooled_prompt_embeds': torch.Size([1, 1280]), 'negative_prompt_embeds': torch.Size([1, 77, 1280]), 'negative_pooled_prompt_embeds': torch.Size([1, 1280]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 125, 'eta': 0.0, 'strength': 0.5, 'guidance_rescale': 0.7, 'denoising_start': 0.8, 'denoising_end': 1, 'image': <class 'torch.Tensor'>}
2023-09-11 10:51:32,905 | sd | DEBUG | processing_diffusers | VAE decode: name=sdxl_vae.safetensors dtype=torch.float32 upcast=True images=1
2023-09-11 10:51:32,909 | sd | DEBUG | processing_diffusers | Moving to CPU: model=UNet
2023-09-11 10:51:37,114 | sd | DEBUG | processing_diffusers | Moving to CPU: model=refiner
2023-09-11 10:51:38,148 | sd | DEBUG | images | Saving: image=D:\AI\StableDiffusion\AI Img Craft\Outputs\text\00020-painting DMT journey women in the amazon forest.png type=PNG size=1792x2304
2023-09-11 10:51:39,498 | sd | INFO | processing | Processed: images=1 time=151.68s its=0.20 memory={'ram': {'used': 13.97, 'total': 63.93}, 'gpu': {'used': 7.98, 'total': 24.0}, 'retries': 0, 'oom': 0}
2023-09-11 10:52:00,406 | sd | DEBUG | launch | Server alive=True jobs=5 requests=505 uptime=1063s memory used=14.0 total=63.93 idle
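
For reference, the operation chain traced in the log above (base txt2img at 896x1152, image-space hires upscale to 1792x2304, an SDXL img2img hires pass at strength 0.5, then a refiner pass starting at denoising_start=0.8) corresponds roughly to the following diffusers calls. This is a minimal sketch, not SD.Next's actual code: the plain PIL resize stands in for the 4x_NMKD-Siax_200k ESRGAN upscaler, and the model IDs and step counts are illustrative.

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "painting, DMT journey, women in the amazon forest, detailed face, epic"

# 1. base txt2img at 896x1152 (StableDiffusionXLPipeline / TEXT_2_IMAGE in the log)
image = base(prompt=prompt, width=896, height=1152, num_inference_steps=30).images[0]

# 2. image-space upscale to 1792x2304 (a plain resize standing in for the ESRGAN upscaler)
image = image.resize((1792, 2304))

# 3. hires pass: img2img over the upscaled image at strength 0.5, keeping the latents
img2img = StableDiffusionXLImg2ImgPipeline(**base.components).to("cuda")
latents = img2img(prompt=prompt, image=image, strength=0.5,
                  num_inference_steps=30, output_type="latent").images

# 4. refiner pass over the tail of the schedule (denoising_start=0.8 in the log)
image = refiner(prompt=prompt, image=latents,
                denoising_start=0.8, num_inference_steps=30).images[0]
image.save("hires_refined.png")
```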

Backend

Diffusers

Model

SD-XL

Acknowledgements

vladmandic commented 9 months ago

let's simplify - do that without refine, and compare output with and without upscale+hires, and post a log with --debug showing those two runs (upload, do not copy and paste) as i've requested, as i need to trace actual operations.

Kedaranatha commented 9 months ago

Refiner=none | Upscaler = none

2023-09-11 12:14:57,344 | sd | INFO | launch | Starting SD.Next 2023-09-11 12:14:57,348 | sd | INFO | installer | Python 3.10.11 on Windows 2023-09-11 12:14:57,502 | sd | INFO | installer | Version: app=sd.next updated=2023-09-11 hash=2f071c65 url=https://github.com/vladmandic/automatic/tree/master 2023-09-11 12:14:58,025 | sd | INFO | launch | Platform: arch=AMD64 cpu=AMD64 Family 23 Model 113 Stepping 0, AuthenticAMD system=Windows release=Windows-10-10.0.22621-SP0 python=3.10.11 2023-09-11 12:14:58,027 | sd | DEBUG | installer | Setting environment tuning 2023-09-11 12:14:58,028 | sd | DEBUG | installer | Torch overrides: cuda=False rocm=False ipex=False diml=False openvino=False 2023-09-11 12:14:58,029 | sd | DEBUG | installer | Torch allowed: cuda=True rocm=True ipex=True diml=True openvino=True 2023-09-11 12:14:58,033 | sd | INFO | installer | nVidia CUDA toolkit detected 2023-09-11 12:14:58,185 | sd | DEBUG | installer | Repository update time: Mon Sep 11 06:55:28 2023 2023-09-11 12:14:58,187 | sd | INFO | installer | Verifying requirements 2023-09-11 12:14:58,200 | sd | INFO | installer | Verifying packages 2023-09-11 12:14:58,202 | sd | INFO | installer | Verifying repositories 2023-09-11 12:14:58,295 | sd | DEBUG | installer | Submodule: D:\AI\SD.next\repositories\stable-diffusion-stability-ai / main 2023-09-11 12:14:59,213 | sd | DEBUG | installer | Submodule: D:\AI\SD.next\repositories\taming-transformers / master 2023-09-11 12:15:01,661 | sd | DEBUG | installer | Submodule: D:\AI\SD.next\repositories\BLIP / main 2023-09-11 12:15:02,556 | sd | INFO | installer | Verifying submodules 2023-09-11 12:15:05,537 | sd | DEBUG | installer | Submodule: extensions-builtin/a1111-sd-webui-lycoris / main 2023-09-11 12:15:05,639 | sd | DEBUG | installer | Submodule: extensions-builtin/clip-interrogator-ext / main 2023-09-11 12:15:05,744 | sd | DEBUG | installer | Submodule: extensions-builtin/multidiffusion-upscaler-for-automatic1111 / main 2023-09-11 12:15:05,846 | sd | DEBUG | installer | Submodule: extensions-builtin/sd-extension-system-info / main 2023-09-11 12:15:05,947 | sd | DEBUG | installer | Submodule: extensions-builtin/sd-webui-agent-scheduler / main 2023-09-11 12:15:06,052 | sd | DEBUG | installer | Submodule: extensions-builtin/sd-webui-controlnet / main 2023-09-11 12:15:06,167 | sd | DEBUG | installer | Submodule: extensions-builtin/stable-diffusion-webui-images-browser / main 2023-09-11 12:15:06,279 | sd | DEBUG | installer | Submodule: extensions-builtin/stable-diffusion-webui-rembg / master 2023-09-11 12:15:06,380 | sd | DEBUG | installer | Submodule: modules/lora / main 2023-09-11 12:15:06,484 | sd | DEBUG | installer | Submodule: modules/lycoris / main 2023-09-11 12:15:06,594 | sd | DEBUG | installer | Submodule: wiki / master 2023-09-11 12:15:06,820 | sd | DEBUG | installer | Installed packages: 215 2023-09-11 12:15:06,821 | sd | DEBUG | installer | Extensions all: ['a1111-sd-webui-lycoris', 'clip-interrogator-ext', 'LDSR', 'Lora', 'multidiffusion-upscaler-for-automatic1111', 'ScuNET', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg', 'SwinIR'] 2023-09-11 12:15:06,995 | sd | DEBUG | installer | Running extension installer: D:\AI\SD.next\extensions-builtin\clip-interrogator-ext\install.py 2023-09-11 12:15:13,366 | sd | DEBUG | installer | Running extension installer: D:\AI\SD.next\extensions-builtin\sd-extension-system-info\install.py 2023-09-11 12:15:13,859 | sd | DEBUG 
| installer | Running extension installer: D:\AI\SD.next\extensions-builtin\sd-webui-agent-scheduler\install.py 2023-09-11 12:15:14,384 | sd | DEBUG | installer | Running extension installer: D:\AI\SD.next\extensions-builtin\sd-webui-controlnet\install.py 2023-09-11 12:15:14,917 | sd | DEBUG | installer | Running extension installer: D:\AI\SD.next\extensions-builtin\stable-diffusion-webui-images-browser\install.py 2023-09-11 12:15:15,436 | sd | DEBUG | installer | Running extension installer: D:\AI\SD.next\extensions-builtin\stable-diffusion-webui-rembg\install.py 2023-09-11 12:15:16,120 | sd | DEBUG | installer | Extensions all: ['StyleSelectorXL'] 2023-09-11 12:15:16,285 | sd | INFO | installer | Extensions enabled: ['a1111-sd-webui-lycoris', 'clip-interrogator-ext', 'LDSR', 'Lora', 'multidiffusion-upscaler-for-automatic1111', 'ScuNET', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser', 'stable-diffusion-webui-rembg', 'SwinIR', 'StyleSelectorXL'] 2023-09-11 12:15:16,287 | sd | INFO | installer | Verifying packages 2023-09-11 12:15:16,289 | sd | DEBUG | launch | Setup complete without errors: 1694459716 2023-09-11 12:15:16,296 | sd | INFO | installer | Extension preload: {'extensions-builtin': 0.0, 'extensions': 0.0} 2023-09-11 12:15:16,297 | sd | DEBUG | launch | Starting module: <module 'webui' from 'D:\AI\SD.next\webui.py'> 2023-09-11 12:15:16,298 | sd | INFO | launch | Command line args: ['--debug'] debug=True 2023-09-11 12:15:22,951 | sd | DEBUG | loader | Loaded packages: torch=2.0.1+cu118 diffusers=0.20.2 gradio=3.43.2 2023-09-11 12:15:23,352 | sd | DEBUG | shared | Reading: config.json len=26 2023-09-11 12:15:23,354 | sd | INFO | shared | Engine: backend=Backend.DIFFUSERS compute=cuda mode=no_grad device=cuda 2023-09-11 12:15:23,428 | sd | INFO | shared | Device: device=NVIDIA GeForce RTX 3090 (1) (compute_37) (8, 6) cuda=11.8 cudnn=8700 driver=537.13 2023-09-11 12:15:24,059 | sd | DEBUG | webui | Entering start sequence 2023-09-11 12:15:24,061 | sd | DEBUG | webui | Initializing 2023-09-11 12:15:24,064 | sd | INFO | sd_vae | Available VAEs: models\VAE items=2 2023-09-11 12:15:24,066 | sd | INFO | shared | Diffusers disabling uncompatible extensions: ['sd-webui-controlnet', 'multidiffusion-upscaler-for-automatic1111', 'a1111-sd-webui-lycoris'] 2023-09-11 12:15:24,069 | sd | DEBUG | modelloader | Scanning diffusers cache: models\Diffusers models\Diffusers items=0 time=0.00s 2023-09-11 12:15:24,070 | sd | DEBUG | shared | Reading: cache.json len=1 2023-09-11 12:15:24,072 | sd | DEBUG | shared | Reading: metadata.json len=48 2023-09-11 12:15:24,075 | sd | INFO | sd_models | Available models: D:\AI\StableDiffusion\models\Stable-diffusion items=27 time=0.01s 2023-09-11 12:15:24,135 | sd | DEBUG | webui | Loading extensions 2023-09-11 12:15:27,341 | sd | INFO | webui | Extensions time: 3.20s { clip-interrogator-ext=1.30s LDSR=0.05s Lora=0.29s sd-extension-system-info=0.05s sd-webui-agent-scheduler=0.62s stable-diffusion-webui-images-browser=0.14s stable-diffusion-webui-rembg=0.57s SwinIR=0.05s StyleSelectorXL=0.05s ScuNET=0.05s } 2023-09-11 12:15:27,351 | sd | DEBUG | modelloader | FS walk error: [WinError 3] The system cannot find the path specified: 'D:\AI\SD.next\models\RealESRGAN' D:\AI\SD.next\models\RealESRGAN 2023-09-11 12:15:27,354 | sd | DEBUG | modelloader | Loaded upscalers: items=15 2023-09-11 12:15:27,628 | sd | INFO | shared | Loading UI theme: name=black-teal style=Auto 2023-09-11 12:15:27,629 | sd | 
DEBUG | styles | Loaded styles: folder=styles.csv items=0 2023-09-11 12:15:27,632 | sd | DEBUG | webui | Creating UI 2023-09-11 12:15:27,638 | sd | DEBUG | shared | Reading: ui-config.json len=0 2023-09-11 12:15:27,667 | sd | DEBUG | modelloader | FS walk error: [WinError 3] The system cannot find the path specified: 'D:\AI\SD.next\models\Diffusers' D:\AI\SD.next\models\Diffusers 2023-09-11 12:15:27,680 | sd | DEBUG | ui_extra_networks | Extra networks: page='checkpoints' items=27 subdirs=20 tab=txt2img dirs=['D:\AI\StableDiffusion\models\Stable-diffusion', 'models\Diffusers', 'D:\AI\SD.next\models\Stable-diffusion'] time=0.02 2023-09-11 12:15:27,683 | sd | DEBUG | ui_extra_networks | Extra networks: page='styles' items=0 subdirs=0 tab=txt2img dirs=['styles.csv'] time=0.0 2023-09-11 12:15:27,686 | sd | DEBUG | modelloader | FS walk error: [WinError 3] The system cannot find the path specified: 'D:\AI\SD.next\models\embeddings' D:\AI\SD.next\models\embeddings 2023-09-11 12:15:27,687 | sd | DEBUG | ui_extra_networks | Extra networks: page='textual inversion' items=0 subdirs=0 tab=txt2img dirs=['models\embeddings'] time=0.0 2023-09-11 12:15:27,690 | sd | DEBUG | modelloader | FS walk error: [WinError 3] The system cannot find the path specified: 'D:\AI\SD.next\models\hypernetworks' D:\AI\SD.next\models\hypernetworks 2023-09-11 12:15:27,692 | sd | DEBUG | ui_extra_networks | Extra networks: page='hypernetworks' items=0 subdirs=0 tab=txt2img dirs=['models\hypernetworks'] time=0.0 2023-09-11 12:15:27,695 | sd | DEBUG | ui_extra_networks | Extra networks: page='lora' items=0 subdirs=0 tab=txt2img dirs=['models\Lora'] time=0.0 2023-09-11 12:15:27,859 | sd | DEBUG | shared | Reading: ui-config.json len=0 2023-09-11 12:15:27,891 | sd | INFO | shared | Themes: builtin=6 default=5 external=45 2023-09-11 12:15:28,500 | sd | DEBUG | script_callbacks | Script: 0.53s ui_tabs D:\AI\SD.next\extensions-builtin\stable-diffusion-webui-images-browser\scripts\image_browser.py 2023-09-11 12:15:29,499 | sd | DEBUG | ui_extensions | Extension list refresh: processed=221 installed=13 enabled=10 disabled=3 visible=221 hidden=0 2023-09-11 12:15:29,743 | sd | INFO | webui | Local URL: http://127.0.0.1:7861/ 2023-09-11 12:15:29,744 | sd | DEBUG | webui | Gradio registered functions: 1449 2023-09-11 12:15:29,745 | sd | INFO | middleware | Initializing middleware 2023-09-11 12:15:29,749 | sd | DEBUG | webui | Creating API 2023-09-11 12:15:29,928 | sd | INFO | task_runner | [AgentScheduler] Task queue is empty 2023-09-11 12:15:29,930 | sd | INFO | api | [AgentScheduler] Registering APIs 2023-09-11 12:15:30,035 | sd | DEBUG | webui | Scripts setup: ['X/Y/Z grid:0.007s'] 2023-09-11 12:15:30,036 | sd | DEBUG | sd_models | Model metadata: metadata.json no changes 2023-09-11 12:15:30,038 | sd | DEBUG | devices | Verifying Torch settings 2023-09-11 12:15:30,077 | sd | DEBUG | devices | Desired Torch parameters: dtype=FP16 no-half=False no-half-vae=False upscast=False 2023-09-11 12:15:30,078 | sd | INFO | devices | Setting Torch parameters: dtype=torch.float16 vae=torch.float16 unet=torch.float16 context=no_grad fp16=True bf16=False 2023-09-11 12:15:30,080 | sd | DEBUG | devices | Torch default device: cuda 2023-09-11 12:15:30,081 | sd | DEBUG | sd_models | Select checkpoint: model SD_XL\sd_xl_base_1.0.safetensors [31e35c80fc] 2023-09-11 12:15:30,082 | sd | INFO | sd_vae | Loading diffusers VAE: models\VAE\sdxl_vae.safetensors source=settings 2023-09-11 12:15:30,083 | sd | DEBUG | sd_vae | Diffusers VAE load config: 
{'low_cpu_mem_usage': False, 'torch_dtype': torch.float16, 'use_safetensors': True, 'variant': 'fp16'} 2023-09-11 12:15:30,084 | sd | DEBUG | sd_models | Model autodetect vae: D:\AI\StableDiffusion\models\Stable-diffusion\SD_XL\sd_xl_base_1.0.safetensors pipeline=Stable Diffusion XL size=6.46 GB 2023-09-11 12:15:30,272 | sd | INFO | sd_models | Loading diffuser model: D:\AI\StableDiffusion\models\Stable-diffusion\SD_XL\sd_xl_base_1.0.safetensors 2023-09-11 12:15:30,273 | sd | DEBUG | sd_models | Model autodetect model: D:\AI\StableDiffusion\models\Stable-diffusion\SD_XL\sd_xl_base_1.0.safetensors pipeline=Stable Diffusion XL size=6.46 GB 2023-09-11 12:15:37,344 | sd | DEBUG | sd_models | Model model: pipeline=StableDiffusionXLPipeline config={'low_cpu_mem_usage': True, 'torch_dtype': torch.float16, 'load_connected_pipeline': True, 'variant': 'fp16', 'local_files_only ': True, 'extract_ema': True, 'force_zeros_for_empty_prompt ': True, 'requires_aesthetics_score ': False, 'use_safetensors': True} 2023-09-11 12:15:37,348 | sd | DEBUG | sd_models | Model model: enable VAE slicing 2023-09-11 12:15:37,350 | sd | DEBUG | sd_models | Model model: enable VAE tiling 2023-09-11 12:15:37,363 | sd | DEBUG | sd_models | Model model VAE: name=sdxl_vae.safetensors upcast=True 2023-09-11 12:15:39,146 | sd | INFO | textual_inversion | Loaded embeddings: loaded=0 skipped=0 2023-09-11 12:15:39,147 | sd | INFO | sd_models | Model loaded in 9.11s { load=9.11s } native=512 2023-09-11 12:15:39,397 | sd | DEBUG | devices | gc: collected=11601 device=cuda {'ram': {'used': 1.08, 'total': 63.93}, 'gpu': {'used': 8.13, 'total': 24.0}, 'retries': 0, 'oom': 0} 2023-09-11 12:15:39,399 | sd | INFO | sd_models | Model load finished model: {'ram': {'used': 1.08, 'total': 63.93}, 'gpu': {'used': 8.13, 'total': 24.0}, 'retries': 0, 'oom': 0} 2023-09-11 12:15:39,401 | sd | DEBUG | devices | Verifying Torch settings 2023-09-11 12:15:39,403 | sd | DEBUG | devices | Desired Torch parameters: dtype=FP16 no-half=False no-half-vae=False upscast=False 2023-09-11 12:15:39,404 | sd | INFO | devices | Setting Torch parameters: dtype=torch.float16 vae=torch.float16 unet=torch.float16 context=no_grad fp16=True bf16=False 2023-09-11 12:15:39,405 | sd | DEBUG | devices | Torch default device: cuda 2023-09-11 12:15:39,406 | sd | DEBUG | sd_models | Select checkpoint: refiner SD_XL\sd_xl_refiner_1.0.safetensors [7440042bbd] 2023-09-11 12:15:39,407 | sd | INFO | sd_vae | Loading diffusers VAE: models\VAE\sdxl_vae.safetensors source=settings 2023-09-11 12:15:39,408 | sd | DEBUG | sd_vae | Diffusers VAE load config: {'low_cpu_mem_usage': False, 'torch_dtype': torch.float16, 'use_safetensors': True, 'variant': 'fp16'} 2023-09-11 12:15:39,409 | sd | DEBUG | sd_models | Model autodetect vae: D:\AI\StableDiffusion\models\Stable-diffusion\SD_XL\sd_xl_refiner_1.0.safetensors pipeline=Stable Diffusion XL size=5.66 GB 2023-09-11 12:15:39,612 | sd | INFO | sd_models | Loading diffuser refiner: D:\AI\StableDiffusion\models\Stable-diffusion\SD_XL\sd_xl_refiner_1.0.safetensors 2023-09-11 12:15:39,614 | sd | DEBUG | sd_models | Model autodetect refiner: D:\AI\StableDiffusion\models\Stable-diffusion\SD_XL\sd_xl_refiner_1.0.safetensors pipeline=Stable Diffusion XL size=5.66 GB 2023-09-11 12:15:45,052 | sd | DEBUG | sd_models | Model refiner: pipeline=StableDiffusionXLImg2ImgPipeline config={'low_cpu_mem_usage': True, 'torch_dtype': torch.float16, 'load_connected_pipeline': True, 'variant': 'fp16', 'local_files_only ': True, 'extract_ema': True, 
'force_zeros_for_empty_prompt ': True, 'requires_aesthetics_score ': False, 'use_safetensors': True} 2023-09-11 12:15:45,054 | sd | DEBUG | sd_models | Model refiner: enable VAE slicing 2023-09-11 12:15:45,055 | sd | DEBUG | sd_models | Model refiner: enable VAE tiling 2023-09-11 12:15:45,065 | sd | DEBUG | sd_models | Model refiner VAE: name=sdxl_vae.safetensors upcast=True 2023-09-11 12:15:45,066 | sd | DEBUG | sd_models | Moving refiner model to CPU 2023-09-11 12:15:45,087 | sd | INFO | textual_inversion | Loaded embeddings: loaded=0 skipped=0 2023-09-11 12:15:45,088 | sd | INFO | sd_models | Model loaded in 5.69s { load=5.69s } native=512 2023-09-11 12:15:45,356 | sd | DEBUG | devices | gc: collected=1235 device=cuda {'ram': {'used': 6.77, 'total': 63.93}, 'gpu': {'used': 8.13, 'total': 24.0}, 'retries': 0, 'oom': 0} 2023-09-11 12:15:45,358 | sd | INFO | sd_models | Model load finished refiner: {'ram': {'used': 6.77, 'total': 63.93}, 'gpu': {'used': 8.13, 'total': 24.0}, 'retries': 0, 'oom': 0} 2023-09-11 12:15:45,359 | sd | DEBUG | shared | Saving: config.json len=1843 2023-09-11 12:15:45,361 | sd | INFO | webui | Startup time: 29.04s { torch=5.27s gradio=0.60s diffusers=0.76s libraries=1.11s extensions=3.20s onchange=0.27s ui-txt2img=0.10s ui-img2img=0.07s ui-settings=0.09s ui-extensions=1.58s ui-defaults=0.06s launch=0.17s api=0.08s app-started=0.21s checkpoint=15.32s } 2023-09-11 12:16:05,219 | sd | INFO | shared | Themes: builtin=6 default=5 external=45 2023-09-11 12:16:26,694 | sd | DEBUG | generation_parameters_copypaste | Paste prompt: painting, DMT journey, women in the amazon forest, detailed face, epic Negative prompt: ugly, tiling, poorly drawn, blurry, blurred, watermark, grainy, signature, cut off, draft, text, logo Steps: 30, Seed: 3884226443, Sampler: Euler, CFG scale: 7.5, Size: 896x1152, Parser: Full parser, Model: SD_XL_sd_xl_base_1.0, Model hash: 31e35c80fc, VAE: sdxl_vae, Variation seed: 1556555378, Variation strength: 0.65, Backend: Diffusers, Version: 2f071c6, Operations: "txt2img, refine", Refiner: SD_XL_sd_xl_refiner_1.0, Image CFG scale: 6, Refiner steps: 25, Refiner start: 0.8, Hires steps: 30, Latent sampler: Euler, CFG rescale: 0.7 2023-09-11 12:16:37,202 | sd | DEBUG | sd_models | Unload weights refiner: {'ram': {'used': 1.13, 'total': 63.93}, 'gpu': {'used': 8.13, 'total': 24.0}, 'retries': 0, 'oom': 0} 2023-09-11 12:16:37,447 | sd | DEBUG | devices | gc: collected=2351 device=cuda {'ram': {'used': 1.1, 'total': 63.93}, 'gpu': {'used': 8.13, 'total': 24.0}, 'retries': 0, 'oom': 0} 2023-09-11 12:16:37,449 | sd | DEBUG | shared | Saving: config.json len=1768 2023-09-11 12:16:37,451 | sd | DEBUG | ui | Setting changed: key=sd_model_refiner, value=None 2023-09-11 12:16:48,257 | sd | DEBUG | txt2img | txt2img: id_task=task(azp11rdt4oaz8ho)|prompt=painting, DMT journey, women in the amazon forest, detailed face, epic|negative_prompt=ugly, tiling, poorly drawn, blurry, blurred, watermark, grainy, signature, cut off, draft, text, 
logo|prompt_styles=[]|steps=30|sampler_index=8|latent_index=8|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=7.5|clip_skip=1|seed=3884226443.0|subseed=1556555378.0|subseed_strength=0.65|seed_resize_from_h=0|seed_resize_from_w=0||height=1152|width=896|enable_hr=True|denoising_strength=0.5|hr_scale=2|hr_upscaler=None|hr_force=True|hr_second_pass_steps=30|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=25|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[] 2023-09-11 12:16:48,267 | sd | DEBUG | sd_samplers | Sampler: sampler=Euler config={'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'prediction_type': 'epsilon', 'interpolation_type': 'linear', 'use_karras_sigmas': True} 2023-09-11 12:16:49,307 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionXLPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 2048]), 'pooled_prompt_embeds': torch.Size([1, 1280]), 'negative_prompt_embeds': torch.Size([1, 77, 2048]), 'negative_pooled_prompt_embeds': torch.Size([1, 1280]), 'guidance_scale': 7.5, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 30, 'eta': 0.0, 'guidance_rescale': 0.7, 'denoising_end': None, 'height': 1152, 'width': 896} 2023-09-11 12:16:50,932 | sd | INFO | sd_vae_approx | Loaded VAE-approx model: models\VAE-approx\model.pt 2023-09-11 12:16:58,301 | sd | DEBUG | processing_diffusers | VAE decode: name=sdxl_vae.safetensors dtype=torch.float32 upcast=True images=1 2023-09-11 12:16:58,303 | sd | DEBUG | processing_diffusers | Moving to CPU: model=UNet 2023-09-11 12:17:05,835 | sd | DEBUG | images | Saving: image=D:\AI\StableDiffusion\AI Img Craft\Outputs\text\00021-painting DMT journey women in the amazon forest.png type=PNG size=896x1152 2023-09-11 12:17:06,239 | sd | INFO | processing | Processed: images=1 time=17.98s its=1.67 memory={'ram': {'used': 1.67, 'total': 63.93}, 'gpu': {'used': 8.34, 'total': 24.0}, 'retries': 0, 'oom': 0} 2023-09-11 12:17:59,626 | sd | DEBUG | launch | Server alive=True jobs=2 requests=224 uptime=156s memory used=1.67 total=63.93 idle

Kedaranatha commented 9 months ago

Refiner=none | Upscaler = 4x_NMKD !!!! note: result as expected / all details are present 2023-09-11 12:19:59,639 | sd | DEBUG | launch | Server alive=True jobs=2 requests=248 uptime=276s memory used=1.67 total=63.93 idle 2023-09-11 12:20:05,854 | sd | DEBUG | txt2img | txt2img: id_task=task(wlyugg828rilgws)|prompt=painting, DMT journey, women in the amazon forest, detailed face, epic|negative_prompt=ugly, tiling, poorly drawn, blurry, blurred, watermark, grainy, signature, cut off, draft, text, logo|prompt_styles=[]|steps=30|sampler_index=8|latent_index=8|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=7.5|clip_skip=1|seed=3884226443.0|subseed=1556555378.0|subseed_strength=0.65|seed_resize_from_h=0|seed_resize_from_w=0||height=1152|width=896|enable_hr=True|denoising_strength=0.5|hr_scale=2|hr_upscaler=4x_NMKD-Siax_200k|hr_force=True|hr_second_pass_steps=30|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=25|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[] 2023-09-11 12:20:06,679 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionXLPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 2048]), 'pooled_prompt_embeds': torch.Size([1, 1280]), 'negative_prompt_embeds': torch.Size([1, 77, 2048]), 'negative_pooled_prompt_embeds': torch.Size([1, 1280]), 'guidance_scale': 7.5, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 30, 'eta': 0.0, 'guidance_rescale': 0.7, 'denoising_end': None, 'height': 1152, 'width': 896} 2023-09-11 12:20:14,495 | sd | DEBUG | processing | Init hires: upscaler=4x_NMKD-Siax_200k sampler=Euler resize=0x0 upscale=1792x2304 2023-09-11 12:20:14,497 | sd | INFO | processing_diffusers | Hires: upscaler=4x_NMKD-Siax_200k width=1792 height=2304 images=1 2023-09-11 12:20:14,499 | sd | DEBUG | processing_diffusers | VAE decode: name=sdxl_vae.safetensors dtype=torch.float32 upcast=True images=1 2023-09-11 12:20:14,500 | sd | DEBUG | processing_diffusers | Moving to CPU: model=UNet 2023-09-11 12:20:21,861 | sd | INFO | sd_models | Pipeline class changed from StableDiffusionXLPipeline to StableDiffusionXLImg2ImgPipeline 2023-09-11 12:20:22,540 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionXLImg2ImgPipeline task=DiffusersTaskType.IMAGE_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 2048]), 'pooled_prompt_embeds': torch.Size([1, 1280]), 'negative_prompt_embeds': torch.Size([1, 77, 2048]), 'negative_pooled_prompt_embeds': torch.Size([1, 1280]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 61, 'eta': 0.0, 'guidance_rescale': 0.7, 'image': <class 'list'>, 'strength': 0.5} 2023-09-11 12:21:08,836 | sd | DEBUG | processing_diffusers | VAE decode: name=sdxl_vae.safetensors dtype=torch.float32 upcast=True images=1 2023-09-11 12:21:08,837 | sd | DEBUG | processing_diffusers | Moving to CPU: model=UNet 2023-09-11 12:21:15,149 | sd | DEBUG | images | Saving: image=D:\AI\StableDiffusion\AI Img Craft\Outputs\text\00022-painting DMT journey women in the amazon forest.png type=PNG size=1792x2304 2023-09-11 12:21:16,735 | sd | INFO | processing | Processed: images=1 time=70.88s its=0.42 memory={'ram': {'used': 1.76, 'total': 63.93}, 'gpu': {'used': 8.45, 'total': 24.0}, 'retries': 0, 'oom': 0} 2023-09-11 12:21:59,875 | sd | DEBUG | launch | Server alive=True jobs=3 requests=353 uptime=397s memory used=1.78 total=63.93 idle

Kedaranatha commented 9 months ago

Refiner=on | Upscaler=4x_NMKD. Note: loses significant details, blurred, etc.

2023-09-11 12:24:00,115 | sd | DEBUG | launch | Server alive=True jobs=3 requests=377 uptime=517s memory used=1.78 total=63.93 idle 2023-09-11 12:24:06,819 | sd | DEBUG | sd_models | Select checkpoint: refiner SD_XL\sd_xl_refiner_1.0.safetensors [7440042bbd] 2023-09-11 12:24:06,821 | sd | DEBUG | sd_models | Load model weights: existing=False target=D:\AI\StableDiffusion\models\Stable-diffusion\SD_XL\sd_xl_refiner_1.0.safetensors info=None 2023-09-11 12:24:06,822 | sd | DEBUG | devices | Verifying Torch settings 2023-09-11 12:24:06,824 | sd | DEBUG | devices | Desired Torch parameters: dtype=FP16 no-half=False no-half-vae=False upscast=False 2023-09-11 12:24:06,825 | sd | INFO | devices | Setting Torch parameters: dtype=torch.float16 vae=torch.float16 unet=torch.float16 context=no_grad fp16=True bf16=False 2023-09-11 12:24:06,827 | sd | DEBUG | devices | Torch default device: cuda 2023-09-11 12:24:06,828 | sd | INFO | sd_vae | Loading diffusers VAE: models\VAE\sdxl_vae.safetensors source=settings 2023-09-11 12:24:06,829 | sd | DEBUG | sd_vae | Diffusers VAE load config: {'low_cpu_mem_usage': False, 'torch_dtype': torch.float16, 'use_safetensors': True, 'variant': 'fp16'} 2023-09-11 12:24:06,830 | sd | DEBUG | sd_models | Model autodetect vae: D:\AI\StableDiffusion\models\Stable-diffusion\SD_XL\sd_xl_refiner_1.0.safetensors pipeline=Stable Diffusion XL size=5.66 GB 2023-09-11 12:24:07,047 | sd | INFO | sd_models | Loading diffuser refiner: D:\AI\StableDiffusion\models\Stable-diffusion\SD_XL\sd_xl_refiner_1.0.safetensors 2023-09-11 12:24:07,049 | sd | DEBUG | sd_models | Model autodetect refiner: D:\AI\StableDiffusion\models\Stable-diffusion\SD_XL\sd_xl_refiner_1.0.safetensors pipeline=Stable Diffusion XL size=5.66 GB 2023-09-11 12:24:12,418 | sd | DEBUG | sd_models | Model refiner: pipeline=StableDiffusionXLImg2ImgPipeline config={'low_cpu_mem_usage': True, 'torch_dtype': torch.float16, 'load_connected_pipeline': True, 'variant': 'fp16', 'local_files_only ': True, 'extract_ema': True, 'force_zeros_for_empty_prompt ': True, 'requires_aesthetics_score ': False, 'use_safetensors': True} 2023-09-11 12:24:12,420 | sd | DEBUG | sd_models | Model refiner: enable VAE slicing 2023-09-11 12:24:12,421 | sd | DEBUG | sd_models | Model refiner: enable VAE tiling 2023-09-11 12:24:12,431 | sd | DEBUG | sd_models | Model refiner VAE: name=sdxl_vae.safetensors upcast=True 2023-09-11 12:24:12,433 | sd | DEBUG | sd_models | Moving refiner model to CPU 2023-09-11 12:24:12,453 | sd | INFO | textual_inversion | Loaded embeddings: loaded=0 skipped=0 2023-09-11 12:24:12,455 | sd | INFO | sd_models | Model loaded in 5.63s { load=5.63s } native=512 2023-09-11 12:24:12,737 | sd | DEBUG | devices | gc: collected=25750 device=cuda {'ram': {'used': 7.43, 'total': 63.93}, 'gpu': {'used': 8.35, 'total': 24.0}, 'retries': 0, 'oom': 0} 2023-09-11 12:24:12,739 | sd | INFO | sd_models | Model load finished refiner: {'ram': {'used': 7.43, 'total': 63.93}, 'gpu': {'used': 8.35, 'total': 24.0}, 'retries': 0, 'oom': 0} 2023-09-11 12:24:12,741 | sd | DEBUG | shared | Saving: config.json len=1843 2023-09-11 12:24:12,742 | sd | DEBUG | ui | Setting changed: key=sd_model_refiner, value=SD_XL\sd_xl_refiner_1.0.safetensors [7440042bbd] 2023-09-11 12:24:21,291 | sd | DEBUG | txt2img | txt2img: id_task=task(yogqsfkq7v6rsrh)|prompt=painting, DMT journey, women in the amazon forest, detailed face, epic|negative_prompt=ugly, tiling, poorly drawn, blurry, blurred, watermark, grainy, signature, cut off, draft, text, 
logo|prompt_styles=[]|steps=30|sampler_index=8|latent_index=8|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=7.5|clip_skip=1|seed=3884226443.0|subseed=1556555378.0|subseed_strength=0.65|seed_resize_from_h=0|seed_resize_from_w=0||height=1152|width=896|enable_hr=True|denoising_strength=0.5|hr_scale=2|hr_upscaler=4x_NMKD-Siax_200k|hr_force=True|hr_second_pass_steps=30|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=25|refiner_start=0.8|refiner_prompt=|refiner_negative=|override_settings_texts=[] 2023-09-11 12:24:21,297 | sd | INFO | sd_models | Pipeline class changed from StableDiffusionXLImg2ImgPipeline to StableDiffusionXLPipeline 2023-09-11 12:24:22,132 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionXLPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 2048]), 'pooled_prompt_embeds': torch.Size([1, 1280]), 'negative_prompt_embeds': torch.Size([1, 77, 2048]), 'negative_pooled_prompt_embeds': torch.Size([1, 1280]), 'guidance_scale': 7.5, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 30, 'eta': 0.0, 'guidance_rescale': 0.7, 'denoising_end': None, 'height': 1152, 'width': 896} 2023-09-11 12:24:29,984 | sd | DEBUG | processing | Init hires: upscaler=4x_NMKD-Siax_200k sampler=Euler resize=0x0 upscale=1792x2304 2023-09-11 12:24:29,985 | sd | INFO | processing_diffusers | Hires: upscaler=4x_NMKD-Siax_200k width=1792 height=2304 images=1 2023-09-11 12:24:29,987 | sd | DEBUG | processing_diffusers | VAE decode: name=sdxl_vae.safetensors dtype=torch.float32 upcast=True images=1 2023-09-11 12:24:29,988 | sd | DEBUG | processing_diffusers | Moving to CPU: model=UNet 2023-09-11 12:24:37,157 | sd | INFO | sd_models | Pipeline class changed from StableDiffusionXLPipeline to StableDiffusionXLImg2ImgPipeline 2023-09-11 12:24:37,835 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionXLImg2ImgPipeline task=DiffusersTaskType.IMAGE_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 2048]), 'pooled_prompt_embeds': torch.Size([1, 1280]), 'negative_prompt_embeds': torch.Size([1, 77, 2048]), 'negative_pooled_prompt_embeds': torch.Size([1, 1280]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 61, 'eta': 0.0, 'guidance_rescale': 0.7, 'image': <class 'list'>, 'strength': 0.5} 2023-09-11 12:25:20,384 | sd | DEBUG | processing_diffusers | Moving to CPU: model=base 2023-09-11 12:25:21,591 | sd | DEBUG | sd_samplers | Sampler: sampler=Euler config={'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'prediction_type': 'epsilon', 'interpolation_type': 'linear', 'use_karras_sigmas': True} 2023-09-11 12:25:23,285 | sd | DEBUG | processing_diffusers | Diffuser pipeline: StableDiffusionXLImg2ImgPipeline task=DiffusersTaskType.IMAGE_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 1280]), 'pooled_prompt_embeds': torch.Size([1, 1280]), 'negative_prompt_embeds': torch.Size([1, 77, 1280]), 'negative_pooled_prompt_embeds': torch.Size([1, 1280]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent', 'num_inference_steps': 125, 'eta': 0.0, 'strength': 0.5, 'guidance_rescale': 0.7, 'denoising_start': 0.8, 'denoising_end': 1, 'image': <class 'torch.Tensor'>} 2023-09-11 12:26:00,237 | sd | DEBUG | launch | Server alive=True jobs=4 requests=528 uptime=637s memory used=8.41 total=63.93 job="txt2img" 0/2 2023-09-11 
12:26:43,007 | sd | DEBUG | processing_diffusers | VAE decode: name=sdxl_vae.safetensors dtype=torch.float32 upcast=True images=1 2023-09-11 12:26:43,009 | sd | DEBUG | processing_diffusers | Moving to CPU: model=UNet 2023-09-11 12:26:46,630 | sd | DEBUG | processing_diffusers | Moving to CPU: model=refiner 2023-09-11 12:26:47,726 | sd | DEBUG | images | Saving: image=D:\AI\StableDiffusion\AI Img Craft\Outputs\text\00023-painting DMT journey women in the amazon forest.png type=PNG size=1792x2304 2023-09-11 12:26:49,071 | sd | INFO | processing | Processed: images=1 time=147.78s its=0.20 memory={'ram': {'used': 13.98, 'total': 63.93}, 'gpu': {'used': 10.16, 'total': 24.0}, 'retries': 0, 'oom': 0}

Kedaranatha commented 9 months ago

Refiner=none | Upscaler=4x_NMKD

Kedaranatha commented 9 months ago

Refiner=on | Upscaler=4x_NMKD. Note: loses significant details, blurred, etc. (attached: 00023-painting DMT journey women in the amazon forest)

Kedaranatha commented 9 months ago

Refiner=none | Upscaler = none

(attached: 00021-painting DMT journey women in the amazon forest)

Kedaranatha commented 9 months ago

> let's simplify - do that without refine, and compare output with and without upscale+hires, and post a log with --debug showing those two runs (upload, do not copy and paste) as i've requested, as i need to trace actual operations.

Done. Please let me know if you need any other debugging information, but it is subjectively clear that adding the refiner is causing this issue.

vladmandic commented 9 months ago

refiner is designed to take latents as input, but you're running a non-latent upscaler, so the refiner takes the upscaled image as an actual image and overly smoothens it - totally not surprised.

can it be made to work? it would need the upscaled image to be re-converted back to latent format, and i just don't see that as a valid use of base+upscale+refiner. you can use base+upscale+hires and skip the refiner.

can this be made to work? sure. but this is more of a "i came up with a workflow" case than using the product the way it's supposed to be used. does it work in comfy? perhaps it does. i don't know what comfy does with latent-space conversions back and forth.
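
For anyone who wants to try the re-encode approach described above, here is a rough sketch (this is not SD.Next code; the helper name is made up, and reusing the refiner's own VAE and image processor for the round-trip is an assumption). The idea is simply to push the image-space upscaler's output back through the VAE encoder so the refiner receives latents rather than a decoded image.

```python
import torch

def upscaled_image_to_latents(refiner, pil_image, device="cuda", dtype=torch.float16):
    """Hypothetical helper: encode an upscaled PIL image back into SDXL latent space."""
    # VaeImageProcessor.preprocess converts the PIL image to a [-1, 1] normalized BCHW tensor
    pixels = refiner.image_processor.preprocess(pil_image).to(device=device, dtype=dtype)
    with torch.no_grad():
        # note: the stock SDXL VAE can overflow in fp16, so upcasting to fp32 may be needed here
        latents = refiner.vae.encode(pixels).latent_dist.sample()
    # scale to the latent range the UNet expects
    return latents * refiner.vae.config.scaling_factor

# the latents could then be fed to the refiner's denoising_start pass, e.g.:
# refined = refiner(prompt=prompt, image=latents, denoising_start=0.8).images[0]
```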

i'll keep it open and in the backlog. if someone wants to contribute, i'm open to prs.

Kedaranatha commented 9 months ago

I see. Design intent and the developer community are different things :) FYI, I'm getting the same render with latent upscalers as well.

vladmandic commented 9 months ago

please don't make this into a holy war - it's not "design vs developer" - and i don't appreciate that. it's just not what the refiner was intended to do. if you want to use it in a way it was never intended to be used, then yes, results may be suboptimal. can it be improved? for sure. is it a priority? no.

Kedaranatha commented 9 months ago

Okay, got it. Thank you. This is not a war; this is just the way it is :) You're right about "i came up with a workflow" - creative people always do. Many designers in my network and on YouTube were using this flow for 1.5, and they continue with XL. This is why they stick with A1111: there, you can follow this workflow without issues. Just FYI.

vladmandic commented 9 months ago

you did not use that workflow in sd15 since refiner did not exist in sd15. and again, please stop with the holy war. i accepted the issue, but your comments are not really motivating me to fix it.

Kedaranatha commented 9 months ago

Sounds good. Problem solved by switching to ComfyUI; A1111 also doesn't have this issue.

vladmandic commented 9 months ago

problem is valid, don't close just because you feel you've been wronged. your analysis is spot-on. your comments are not.

vladmandic commented 9 months ago

fixed. see https://github.com/vladmandic/automatic/blob/master/CHANGELOG.md for details.