Is there an existing issue for this?
[X] I have searched the existing issues and checked the recent builds/commits of both this extension and the webui
Have you read FAQ on README?
[X] I have updated WebUI and this extension to the latest version
What happened?
I'm trying to start using this extension on the latest Forge with an SDXL model in the img2img tab; I basically just want to test it with default values first. This always happens.
The "0" value in the last error's "broadcast shape" seems to be the number of frames (I changed it a few times to find out); see the minimal repro sketch below.
I have no idea how long my prompt is, but it's probably >75 tokens; I keep the token counter turned off.
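Not sure if it helps, but the last error looks reproducible in isolation. A minimal sketch, assuming the init latent really does end up with 0 frames while the noise tensor is a single image:

```python
# Minimal standalone repro of the RuntimeError at the bottom of the log.
# Assumption (mine): the init latent ends up with 0 frames, giving the
# [0, 4, 152, 104] shape, while the noise tensor is a single image.
import torch

noise = torch.randn(1, 4, 152, 104)         # one image worth of noise
latent_image = torch.zeros(0, 4, 152, 104)  # "empty" init latent: 0 frames

noise += latent_image
# RuntimeError: output with shape [1, 4, 152, 104] doesn't match the
# broadcast shape [0, 4, 152, 104]
```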
Steps to reproduce the problem
Set up the extension as described in How to Use
With an SDXL model loaded, upload an image in i2i and write the prompt
What should have happened?
You guessed it: generation should complete without errors.
Commit where the problem happens
Forge: https://github.com/lllyasviel/stable-diffusion-webui-forge/commit/95b54a27f1df591e0eceb46c0fea70b68327c26c Extension: https://github.com/continue-revolution/sd-webui-animatediff/commit/a88e88912bcbae0531caccfc50fd639f6ea83fd0
What browsers do you use to access the UI ?
Mozilla Firefox
Command Line Arguments
--xformers --cuda-malloc --administrator --autolaunch --theme dark
Console logs
*** Error running before_process: D:\webui-forge\webui\extensions\sd-webui-animatediff\scripts\animatediff.py
Traceback (most recent call last):
File "D:\webui-forge\webui\modules\scripts.py", line 836, in before_process
script.before_process(p, *script_args)
File "D:\webui-forge\webui\extensions\sd-webui-animatediff\scripts\animatediff.py", line 64, in before_process
motion_module.inject(p.sd_model, params.model)
File "D:\webui-forge\webui\extensions\sd-webui-animatediff\scripts\animatediff_mm.py", line 67, in inject
unet = sd_model.model.diffusion_model
AttributeError: 'StableDiffusionXL' object has no attribute 'model'
---
[Unload] Trying to free 7234.36 MB for cuda:0 with 0 models keep loaded ... Current free memory is 2318.96 MB ... Unload model KModel Done.
[Memory Management] Target: IntegratedAutoencoderKL, Free GPU: 5002.73 MB, Model Require: 319.11 MB, Previously Loaded: 0.00 MB, Inference Require: 1536.00 MB, Remaining: 3147.61 MB, All loaded to GPU.
Moving model(s) has taken 2.58 seconds
2024-09-23 04:30:20,695 - AnimateDiff - INFO - Randomizing init_latent according to [].
[Unload] Trying to free 3814.87 MB for cuda:0 with 0 models keep loaded ... Current free memory is 4718.35 MB ... Done.
[Memory Management] Target: JointTextEncoder, Free GPU: 4718.35 MB, Model Require: 1752.98 MB, Previously Loaded: 0.00 MB, Inference Require: 1536.00 MB, Remaining: 1429.37 MB, All loaded to GPU.
Moving model(s) has taken 0.49 seconds
[Unload] Trying to free 1536.00 MB for cuda:0 with 1 models keep loaded ... Current free memory is 2964.76 MB ... Done.
[Unload] Trying to free 7902.16 MB for cuda:0 with 0 models keep loaded ... Current free memory is 2963.90 MB ... Unload model IntegratedAutoencoderKL Current free memory is 3283.02 MB ... Unload model JointTextEncoder Done.
[Memory Management] Target: KModel, Free GPU: 5036.00 MB, Model Require: 4897.05 MB, Previously Loaded: 0.00 MB, Inference Require: 1536.00 MB, Remaining: -1397.05 MB, CPU Swap Loaded (blocked method): 2213.28 MB, GPU Loaded: 2683.77 MB
Moving model(s) has taken 1.59 seconds
Traceback (most recent call last):
File "D:\webui-forge\webui\modules_forge\main_thread.py", line 30, in work
self.result = self.func(*self.args, **self.kwargs)
File "D:\webui-forge\webui\modules\img2img.py", line 250, in img2img_function
processed = process_images(p)
File "D:\webui-forge\webui\modules\processing.py", line 817, in process_images
res = process_images_inner(p)
File "D:\webui-forge\webui\modules\processing.py", line 960, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "D:\webui-forge\webui\modules\processing.py", line 1790, in sample
samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
File "D:\webui-forge\webui\modules\sd_samplers_kdiffusion.py", line 147, in sample_img2img
xi = self.model_wrap.predictor.noise_scaling(sigma_sched[0], noise, x, max_denoise=False)
File "D:\webui-forge\webui\backend\modules\k_prediction.py", line 84, in noise_scaling
noise += latent_image
RuntimeError: output with shape [1, 4, 152, 104] doesn't match the broadcast shape [0, 4, 152, 104]
output with shape [1, 4, 152, 104] doesn't match the broadcast shape [0, 4, 152, 104]
Additional information
I don't know if this is an issue with Forge in general (I've heard it made big changes to the way it loads models, handles latents, etc.), so just tell me if it's indeed a Forge issue and whether you'd add support for us.
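In case it's useful, here is a rough sketch of the kind of fallback I imagine the injection code would need on Forge. The `forge_objects` attribute path is an assumption on my part, based only on the AttributeError above, not on the extension's actual code:

```python
# Hypothetical sketch only, not the extension's real code.
def find_unet(sd_model):
    # Classic A1111/LDM layout, which animatediff_mm.inject uses today.
    model = getattr(sd_model, "model", None)
    if model is not None and hasattr(model, "diffusion_model"):
        return model.diffusion_model
    # Forge wraps models differently; this attribute path is an assumption.
    forge_objects = getattr(sd_model, "forge_objects", None)
    if forge_objects is not None and hasattr(forge_objects, "unet"):
        return forge_objects.unet
    raise AttributeError("could not locate a UNet on this sd_model object")
```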