inferno46n2 opened 4 months ago
Just tested with 1.5 checkpoints in Forge as well: it doesn't work even with 1.5, whereas auto1111 at least works with 1.5 checkpoints.
Console log below
venv "C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Version: f0.0.17v1.8.0rc-latest-276-g29be1da7
Commit hash: 29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7
Path C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\models\Stable-diffusion does not exist. Skip setting --ckpt-dir
Path C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\models\VAE does not exist. Skip setting --vae-dir
Path C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\models\hypernetworks does not exist. Skip setting --hypernetwork-dir
Path C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\embeddings does not exist. Skip setting --embeddings-dir
Path C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\models\lora does not exist. Skip setting --lora-dir
Path C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\models\ControlNet does not exist. Skip setting --controlnet-dir
Path C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads does not exist. Skip setting --controlnet-preprocessor-models-dir
Launching Web UI with arguments: --forge-ref-a1111-home C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui
Total VRAM 24564 MB, total RAM 65277 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4090 : native
Hint: your device supports --pin-shared-memory for potential speed improvements.
Hint: your device supports --cuda-malloc for potential speed improvements.
Hint: your device supports --cuda-stream for potential speed improvements.
VAE dtype: torch.bfloat16
CUDA Stream Activated: False
Using pytorch cross attention
ControlNet preprocessor location: D:\FORGE\stable-diffusion-webui-forge\models\ControlNetPreprocessor
Calculating sha256 for D:\FORGE\stable-diffusion-webui-forge\models\Stable-diffusion\aamAnyloraAnimeMixAnime_v1.safetensors:
2024-03-20 21:36:47,768 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
Startup time: 8.3s (prepare environment: 1.9s, import torch: 2.8s, import gradio: 0.6s, setup paths: 0.6s, other imports: 0.4s, load scripts: 1.2s, create ui: 0.3s, gradio launch: 0.3s).
354b8c571d3abe963e1520d3b4e0647be519841f4376fc5d16f7f5e7859f7d49
Loading weights [354b8c571d] from D:\FORGE\stable-diffusion-webui-forge\models\Stable-diffusion\aamAnyloraAnimeMixAnime_v1.safetensors
model_type EPS
UNet ADM Dimension 0
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
left over keys: dict_keys(['alphas_cumprod', 'alphas_cumprod_prev', 'betas', 'log_one_minus_alphas_cumprod', 'model_ema.decay', 'model_ema.num_updates', 'posterior_log_variance_clipped', 'posterior_mean_coef1', 'posterior_mean_coef2', 'posterior_variance', 'sqrt_alphas_cumprod', 'sqrt_one_minus_alphas_cumprod', 'sqrt_recip_alphas_cumprod', 'sqrt_recipm1_alphas_cumprod'])
To load target model SD1ClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 23007.9990234375
[Memory Management] Model Memory (MB) = 454.2076225280762
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 21529.791400909424
Moving model(s) has taken 0.17 seconds
Model loaded in 4.9s (calculate hash: 3.3s, forge load real models: 0.8s, load VAE: 0.3s, calculate empty prompt: 0.5s).
To load target model AutoencoderKL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 22587.94091796875
[Memory Management] Model Memory (MB) = 159.55708122253418
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 21404.383836746216
Moving model(s) has taken 0.12 seconds
Traceback (most recent call last):
File "D:\FORGE\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
task.work()
File "D:\FORGE\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
self.result = self.func(*self.args, **self.kwargs)
File "D:\FORGE\stable-diffusion-webui-forge\modules\img2img.py", line 234, in img2img_function
processed = modules.scripts.scripts_img2img.run(p, *args)
File "D:\FORGE\stable-diffusion-webui-forge\modules\scripts.py", line 785, in run
processed = script.run(p, *script_args)
File "D:\FORGE\stable-diffusion-webui-forge\scripts\img2imgalt.py", line 216, in run
processed = processing.process_images(p)
File "D:\FORGE\stable-diffusion-webui-forge\modules\processing.py", line 752, in process_images
res = process_images_inner(p)
File "D:\FORGE\stable-diffusion-webui-forge\modules\processing.py", line 922, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "D:\FORGE\stable-diffusion-webui-forge\scripts\img2imgalt.py", line 173, in sample_extra
lat = (p.init_latent.cpu().numpy() * 10).astype(int)
TypeError: Got unsupported ScalarType BFloat16
Got unsupported ScalarType BFloat16
Error completing request
* Arguments: ('task(8vtb2o7d4h4wvdl)', 0, 'anime screencap, medieval man with long hair, masterpiece, best quality, illustration,', '(worst quality, bad quality:1.4)', [], <PIL.Image.Image image mode=RGBA size=1024x1024 at 0x221A61AA860>, None, None, None, None, None, None, 20, 'Euler', 4, 0, 1, 1, 1, 7, 1.5, 0.75, 0.0, 1024, 1024, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x00000221A61AA740>, 1, False, 1, 0.5, 4, 0, 0.5, 2, False, '', 0.8, -1, False, -1, 0, 0, 0, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='control_v11f1e_sd15_tile [a371b31b]', weight=0.5, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='ControlNet is more important', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), 
False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, ' CFG Scale
should be 2 or lower.', True, False, '', '', False, 25, False, 1, 0, True, 4, 0.5, 'Linear', 'None', '
2024-03-20 21:39:00,932 - ControlNet - INFO - ControlNet Input Mode: InputMode.SIMPLE
2024-03-20 21:39:01,050 - ControlNet - INFO - Using preprocessor: None
2024-03-20 21:39:01,050 - ControlNet - INFO - preprocessor resolution = 1024
Warning torch.load doesn't support weights_only on this pytorch version, loading unsafely.
[] []
2024-03-20 21:39:03,926 - ControlNet - INFO - Current ControlNet ControlNetPatcher: D:\FORGE\stable-diffusion-webui-forge\models\ControlNet\control_v11f1e_sd15_tile.pth
2024-03-20 21:39:04,327 - ControlNet - INFO - ControlNet Method None patched.
To load target model BaseModel
To load target model ControlNet
Begin to load 2 models
[Memory Management] Current Free GPU Memory (MB) = 22430.1904296875
[Memory Management] Model Memory (MB) = 1639.4137649536133
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 19766.776664733887
[Memory Management] Current Free GPU Memory (MB) = 20777.68798828125
[Memory Management] Model Memory (MB) = 689.0852355957031
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 19064.602752685547
Moving model(s) has taken 0.72 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 16/16 [00:03<00:00, 5.26it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 16/16 [00:03<00:00, 4.81it/s]
2024-03-20 21:39:21,217 - ControlNet - INFO - ControlNet Input Mode: InputMode.SIMPLE
2024-03-20 21:39:21,286 - ControlNet - INFO - Using preprocessor: None
2024-03-20 21:39:21,286 - ControlNet - INFO - preprocessor resolution = 1024
2024-03-20 21:39:21,306 - ControlNet - INFO - Current ControlNet ControlNetPatcher: D:\FORGE\stable-diffusion-webui-forge\models\ControlNet\control_v11f1e_sd15_tile.pth
Traceback (most recent call last):
File "D:\FORGE\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
task.work()
File "D:\FORGE\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
self.result = self.func(*self.args, **self.kwargs)
File "D:\FORGE\stable-diffusion-webui-forge\modules\img2img.py", line 234, in img2img_function
processed = modules.scripts.scripts_img2img.run(p, *args)
File "D:\FORGE\stable-diffusion-webui-forge\modules\scripts.py", line 785, in run
processed = script.run(p, *script_args)
File "D:\FORGE\stable-diffusion-webui-forge\scripts\img2imgalt.py", line 216, in run
processed = processing.process_images(p)
File "D:\FORGE\stable-diffusion-webui-forge\modules\processing.py", line 752, in process_images
res = process_images_inner(p)
File "D:\FORGE\stable-diffusion-webui-forge\modules\processing.py", line 922, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "D:\FORGE\stable-diffusion-webui-forge\scripts\img2imgalt.py", line 173, in sample_extra
lat = (p.init_latent.cpu().numpy() * 10).astype(int)
TypeError: Got unsupported ScalarType BFloat16
Got unsupported ScalarType BFloat16
Error completing request
* Arguments: ('task(hjo8wygy9wrowuk)', 0, 'anime screencap, medieval man with long hair, masterpiece, best quality, illustration,', '(worst quality, bad quality:1.4)', [], <PIL.Image.Image image mode=RGBA size=1024x1024 at 0x221A6B12290>, None, None, None, None, None, None, 20, 'Euler', 4, 0, 1, 1, 1, 7, 1.5, 0.75, 0.0, 1024, 1024, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x00000221A6110070>, 1, False, 1, 0.5, 4, 0, 0.5, 2, False, '', 0.8, -1, False, -1, 0, 0, 0, ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=True, module='None', model='control_v11f1e_sd15_tile [a371b31b]', weight=0.5, image=None, resize_mode='Crop and Resize', processor_res=1024, threshold_a=0.5, threshold_b=0.5, guidance_start=0, guidance_end=1, pixel_perfect=True, control_mode='ControlNet is more important', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), 
False, 7, 1, 'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, ' Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8 Will upscale the image by the selected scale factor; use width and height sliders to set tile sizeCFG Scale
should be 2 or lower.', True, False, '', '', False, 25, False, 1, 0, True, 4, 0.5, 'Linear', 'None', '
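The failure is identical in both tracebacks: NumPy has no bfloat16 dtype, so the `.numpy()` call on line 173 of `img2imgalt.py` throws as soon as the init latent arrives in bfloat16 (Forge reports "VAE dtype: torch.bfloat16" at startup). A minimal sketch of the problem, with a possible workaround; the `.float()` upcast is my assumption about a fix, not a confirmed patch:

```python
import torch

# Minimal reproduction (assumes a PyTorch build with bfloat16 CPU tensors).
# Tensor.numpy() cannot convert bfloat16, which is exactly the
# "TypeError: Got unsupported ScalarType BFloat16" seen above.
latent = torch.zeros(1, 4, 64, 64, dtype=torch.bfloat16)

try:
    latent.cpu().numpy()
except TypeError as err:
    print(err)  # Got unsupported ScalarType BFloat16

# Possible workaround (an assumption): upcast to float32 before leaving
# torch, i.e. change img2imgalt.py line 173 to:
lat = (latent.float().cpu().numpy() * 10).astype(int)
```

This would explain why 1.5 checkpoints work in auto1111 (latents stay float32/float16 there) but fail in Forge, where the latent pipeline runs in bfloat16.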
What happened?
Script does not work with any SDXL checkpoint. This tool is honestly one of the best tools to date for animation. You can use it to prestylize frames, then send them through AnimateDiff for cleanup. It's incredible and DESPERATELY NEEDS TO WORK FOR SDXL <3
Steps to reproduce the problem
1) Unroll the Scripts tab in img2img
2) Activate "img2img alternative test"
3) Press Generate
What should have happened?
It's my understanding that this script flips the sigmas so the diffusion process runs in reverse, generating noise from a source image. It works perfectly with 1.5 checkpoints and produces incredible outputs. The same approach works fine in ComfyUI, but unfortunately Comfy cannot compete with Forge / auto1111's image generation, so having this script work for SDXL would let me preprocess all of my images in Forge instead of Comfy.
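The sigma-flipping idea can be sketched as follows. This is a conceptual illustration of a reversed Euler walk, not Forge's actual implementation; `denoiser` is a hypothetical stand-in for the real UNet wrapper:

```python
import numpy as np

def invert_euler(x, denoiser, sigmas):
    """Walk a k-diffusion-style sigma schedule given low -> high
    (i.e. the normal schedule reversed), so each Euler step ADDS noise,
    recovering the latent noise that would regenerate the source image."""
    for sigma_from, sigma_to in zip(sigmas[:-1], sigmas[1:]):
        denoised = denoiser(x, sigma_from)          # model's clean estimate
        d = (x - denoised) / max(sigma_from, 1e-8)  # Euler derivative dx/dsigma
        x = x + d * (sigma_to - sigma_from)         # step toward a noisier sigma
    return x
```

As a sanity check: with a trivial denoiser that always predicts zero, each step rescales x by sigma_to / sigma_from, so a schedule of [1, 2, 4] multiplies the input by 4.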
What browsers do you use to access the UI ?
No response
Sysinfo
sysinfo-2024-03-21-03-21.json
Console logs
Additional information
No response