AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: Fix IMG2IMG Alternative Test Script to Work with SDXL #15341

Closed inferno46n2 closed 5 months ago

inferno46n2 commented 6 months ago


What happened?

The script does not work with any SDXL checkpoint. This tool is honestly one of the best tools to date for animation. You can use it to pre-stylize frames and then send them through AnimateDiff for cleanup. It's incredible and desperately needs to work with SDXL!


Steps to reproduce the problem

1. Unroll the Scripts tab
2. Activate "img2img alternative test"
3. Press Generate

What should have happened?

Self-explanatory: the script should run without errors with an SDXL checkpoint loaded, as it does with SD 1.5 checkpoints.

What browsers do you use to access the UI?

Mozilla Firefox, Google Chrome, Microsoft Edge

Sysinfo

sysinfo-2024-03-21-02-45.json

Console logs

venv "C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Version: v1.8.0
Commit hash: bef51aed032c0aaa5cfd80445bc4cf0d85b408b5
Launching Web UI with arguments:
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
ControlNet preprocessor location: C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2024-03-20 20:42:25,629 - ControlNet - INFO - ControlNet v1.1.441
2024-03-20 20:42:25,946 - ControlNet - INFO - ControlNet v1.1.441
Loading weights [fd07b6f6dd] from C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\models\Stable-diffusion\15\realcartoonPixar_v5.safetensors
Creating model from config: C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\configs\v1-inference.yaml
2024-03-20 20:42:26,277 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 15.8s (prepare environment: 3.7s, import torch: 4.0s, import gradio: 1.5s, setup paths: 1.6s, initialize shared: 0.1s, other imports: 1.2s, list SD models: 2.3s, load scripts: 1.0s, initialize extra networks: 0.1s, create ui: 0.3s, gradio launch: 0.1s).
Applying attention optimization: Doggettx... done.
Model loaded in 2.0s (create model: 0.5s, apply weights to model: 1.2s, load textual inversion embeddings: 0.1s, calculate empty prompt: 0.1s).
Reusing loaded model 15\realcartoonPixar_v5.safetensors [fd07b6f6dd] to load XL\aamXLAnimeMix_v10.safetensors [d48c2391e0]
Loading weights [d48c2391e0] from C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\models\Stable-diffusion\XL\aamXLAnimeMix_v10.safetensors
Creating model from config: C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Applying attention optimization: Doggettx... done.
Model loaded in 5.7s (create model: 0.5s, apply weights to model: 4.7s, apply half(): 0.1s, move model to device: 0.1s).
2024-03-20 20:43:47,210 - ControlNet - INFO - unit_separate = False, style_align = False
*** Error running process: C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py
    Traceback (most recent call last):
      File "C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\modules\scripts.py", line 784, in process
        script.process(p, *script_args)
      File "C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 1279, in process
        self.controlnet_hack(p)
      File "C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 1264, in controlnet_hack
        self.controlnet_main_entry(p)
      File "C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 909, in controlnet_main_entry
        Script.check_sd_version_compatible(unit)
      File "C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 834, in check_sd_version_compatible
        raise Exception(f"ControlNet model {unit.model}({cnet_sd_version}) is not compatible with sd model({sd_version})")
    Exception: ControlNet model control_v11f1e_sd15_tile [a371b31b](StableDiffusionVersion.SD1x) is not compatible with sd model(StableDiffusionVersion.SDXL)

---
  0%|                                                                                           | 0/25 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(5xvcgrrwcpcxpfb)', 0, 'adorable, medival man with long hair, ((Big Pixar Eyes)), illustration,<lora:more_details:0.5>,  <lora:add_saturation:2>', 'BadDream,  verybadimagenegative_v1.3,  EasyNegativeV2,  fcNeg-neg,  UnrealisticDream,', [], <PIL.Image.Image image mode=RGBA size=1024x1024 at 0x1B7A76694E0>, None, None, None, None, None, None, 20, 'Euler', 4, 0, 1, 1, 1, 7.5, 1.5, 0.5, 0.0, 1024, 1024, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x000001B77CC3E1A0>, 1, False, 1, 0.5, 4, 0, 0.5, 2, False, '', 0.8, 1764580089, False, -1, 0, 0, 0, UiControlNetUnit(enabled=True, module='none', model='control_v11f1e_sd15_tile [a371b31b]', weight=0.5, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=True, control_mode='Balanced', inpaint_crop_input_image=True, hr_option='HiResFixOption.BOTH', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=True, module='dw_openpose_full', model='control_v11p_sd15_openpose [cab727d4]', weight=0.7, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=True, control_mode='Balanced', inpaint_crop_input_image=True, hr_option='HiResFixOption.BOTH', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=True, module='normal_bae', model='control_v11p_sd15_normalbae [316696f1]', weight=0.25, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=True, control_mode='Balanced', inpaint_crop_input_image=True, hr_option='HiResFixOption.BOTH', save_detected_map=True, advanced_weighting=None), '* `CFG Scale` should be 2 or lower.', True, False, '', '', False, 25, False, 1, 0, True, 4, 0.5, 'Linear', 'None', '<p 
style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
    Traceback (most recent call last):
      File "C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\modules\img2img.py", line 233, in img2img
        processed = modules.scripts.scripts_img2img.run(p, *args)
      File "C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\modules\scripts.py", line 766, in run
        processed = script.run(p, *script_args)
      File "C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\scripts\img2imgalt.py", line 216, in run
        processed = processing.process_images(p)
      File "C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\modules\processing.py", line 785, in process_images
        res = process_images_inner(p)
      File "C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 59, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\modules\processing.py", line 921, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\scripts\img2imgalt.py", line 188, in sample_extra
        rec_noise = find_noise_for_image_sigma_adjustment(p, cond, uncond, cfg, st)
      File "C:\Users\infer\OneDrive\Documents\Auto1111\stable-diffusion-webui\scripts\img2imgalt.py", line 85, in find_noise_for_image_sigma_adjustment
        cond_in = torch.cat([uncond, cond])
    TypeError: expected Tensor as element 0 in argument 0, but got dict

---
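The `TypeError` at the bottom of the trace points at the root cause: for SDXL, webui passes conditioning as a dict (with keys such as `crossattn` and `vector`), while `img2imgalt.py` assumes a bare tensor and calls `torch.cat([uncond, cond])` directly. A minimal sketch of a compatibility shim (hypothetical, not the actual upstream patch; the `"crossattn"` key name is an assumption based on webui's SDXL conditioning format):

```python
# Hypothetical helper for scripts/img2imgalt.py — a sketch, not a merged fix.
# SD 1.x conditioning arrives as a plain tensor; SDXL conditioning arrives as
# a dict holding several tensors. Unwrapping the cross-attention tensor first
# would let the existing torch.cat call work for both model families.

def unwrap_cond(cond):
    """Return the cross-attention tensor whether cond is a dict (SDXL)
    or already a bare tensor (SD 1.x)."""
    if isinstance(cond, dict):
        # Key name assumed from webui's SDXL conditioning structure.
        return cond["crossattn"]
    return cond

# Usage at the failing line would then read:
#   cond_in = torch.cat([unwrap_cond(uncond), unwrap_cond(cond)])
```

Note this only addresses the `torch.cat` crash; the sigma-adjustment math in the script may need further SDXL-specific changes, and the ControlNet error above is separate (SD 1.5 ControlNet models simply cannot be used with an SDXL checkpoint).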

Additional information

No response

catboxanon commented 5 months ago

Duplicate of https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/12381