Closed UniversalParadox closed 5 months ago
Same error. Mine was working yesterday morning until I updated A1111 to the latest version, v1.9.0.
I installed a fresh copy of 1.9 and confirmed it's still an issue. I then installed a fresh copy of 1.8 and AD works.
Same error after 1.9 update.
I can reproduce. I will fix this problem as soon as I can.
I don't know why yet, but a global variable seems to be changing.
The reason is this PR from A1111, which broke things: https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15423/
Although I've fixed this error, I will discuss with AUTO how we should deal with it in the future.
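To illustrate the failure mode described above, here is a minimal sketch of how an extension that stashes per-run parameters in shared state can hit this exact `AttributeError` when a webui refactor changes whether (or when) its setup hook fires. All names here (`MotionModuleState`, `hook_before_process`) are illustrative, not the extension's actual API:

```python
from types import SimpleNamespace


class MotionModuleState:
    """Hypothetical stand-in for the extension's shared state object."""

    def __init__(self):
        # Expected to be populated by a processing hook before sampling.
        self.ad_params = None

    def hook_before_process(self, batch_size):
        # Under webui 1.8 a hook like this ran before sampling started;
        # if a refactor stops it from firing, ad_params stays None.
        self.ad_params = SimpleNamespace(batch_size=batch_size)

    def forward(self):
        # With ad_params still None, this line raises:
        # AttributeError: 'NoneType' object has no attribute 'batch_size'
        return self.ad_params.batch_size


state = MotionModuleState()
try:
    state.forward()  # hook never ran: reproduces the reported error
except AttributeError as e:
    print(e)

state.hook_before_process(16)
print(state.forward())  # with the hook run first, this succeeds
```

This is why a fresh install of 1.8 works while 1.9 fails with the identical extension code: the extension's own logic is unchanged, only the caller's lifecycle differs.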
Just updated mine this morning and now I'm getting this, with default settings.
@Disorbs You are getting what? Have you updated both webui and ad?
The error is still there even though I updated to the latest SD web UI, v1.9. I am running an instance on Vast.ai using their Docker template. Any idea how to fix it? Thank you! @continue-revolution
**Console Log**

```
File "/workspace/stable-diffusion-webui/modules/txt2img.py", line 109, in txt2img
    processed = processing.process_images(p)
File "/workspace/stable-diffusion-webui/modules/processing.py", line 845, in process_images
    res = process_images_inner(p)
File "/workspace/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 48, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "/workspace/stable-diffusion-webui/modules/processing.py", line 981, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "/workspace/stable-diffusion-webui/modules/processing.py", line 1328, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "/workspace/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 218, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "/workspace/stable-diffusion-webui/modules/sd_samplers_common.py", line 272, in launch_sampling
    return func()
File "/workspace/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 218, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
File "/workspace/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
File "/workspace/stable-diffusion-webui/modules/sd_samplers_cfg_denoiser.py", line 237, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
File "/workspace/stable-diffusion-webui/extensions/sd-webui-animatediff/scripts/animatediff_infv2v.py", line 163, in mm_sd_forward
    out = self.original_forward(
File "/workspace/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "/workspace/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
File "/workspace/stable-diffusion-webui/modules/sd_hijack_utils.py", line 18, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "/workspace/stable-diffusion-webui/modules/sd_hijack_utils.py", line 32, in __call__
    return self.__orig_func(*args, **kwargs)
File "/workspace/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
File "/workspace/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
File "/workspace/stable-diffusion-webui/modules/sd_unet.py", line 91, in UNetModel_forward
    return original_forward(self, x, timesteps, context, *args, **kwargs)
File "/workspace/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 797, in forward
    h = module(h, emb, context)
File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
File "/workspace/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 86, in forward
    x = layer(x)
File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
File "/workspace/stable-diffusion-webui/extensions/sd-webui-animatediff/motion_module.py", line 136, in forward
    return self.temporal_transformer(x)
File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
File "/workspace/stable-diffusion-webui/extensions/sd-webui-animatediff/motion_module.py", line 194, in forward
    hidden_states = block(hidden_states)
File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
File "/workspace/stable-diffusion-webui/extensions/sd-webui-animatediff/motion_module.py", line 248, in forward
    hidden_states = attention_block(norm_hidden_states) + hidden_states
File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/opt/micromamba/envs/webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
File "/workspace/stable-diffusion-webui/extensions/sd-webui-animatediff/motion_module.py", line 334, in forward
    video_length = mm_animatediff.ad_params.batch_size
AttributeError: 'NoneType' object has no attribute 'batch_size'
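The final frame reads `batch_size` off a shared object that is `None` when sampling starts before the extension's setup runs. A hedged sketch of a defensive guard for such a read (not the extension's actual fix, and `get_video_length` is an illustrative name, not a real function in the codebase) would fail fast with an actionable message instead of an opaque `AttributeError` deep inside the motion module:

```python
def get_video_length(ad_params):
    """Return the AnimateDiff batch size, or fail with a clear message.

    ad_params: the parameters object the processing hook should have set,
    or None if the hook never ran (e.g. after a webui lifecycle change).
    """
    if ad_params is None:
        raise RuntimeError(
            "AnimateDiff parameters were never initialized; the processing "
            "hook did not run. Update sd-webui-animatediff to a version "
            "that supports your webui, or roll webui back to v1.8.0."
        )
    return ad_params.batch_size
```

A guard like this would not have prevented the breakage, but it would have turned a cryptic traceback into a message that tells users exactly what to do.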
Why is this marked closed? I am getting this error as well even with the latest code.
AttributeError: 'NoneType' object has no attribute 'batch_size'
Edit: I used your fix for 1.9, which eliminated the error, and I am now able to run the script.
Is there an existing issue for this?
Have you read FAQ on README?
What happened?
Unable to generate any GIFs
Steps to reproduce the problem
What should have happened?
GIF generates
Commit where the problem happens
version: v1.9.0 • python: 3.10.11 • torch: 2.1.2+cu121 • xformers: 0.0.23.post1 • gradio: 3.41.2 • checkpoint: c35e1054c0
What browsers do you use to access the UI?
No response
Command Line Arguments
Console logs
Additional information
No response