dvruette / sd-webui-fabric

MIT License
401 stars 23 forks

Fabric doesn't work (?) - errors during generation #31

Closed zethriller closed 11 months ago

zethriller commented 1 year ago

Not sure it actually does anything, hence the title, because errors show up during generation.

[FABRIC] Patching U-Net forward pass... (7 likes, 5 dislikes)
*** Error running process: F:\automatic1111\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\fabric.py
    Traceback (most recent call last):
      File "F:\automatic1111\stable-diffusion-webui\modules\scripts.py", line 619, in process
        script.process(p, *script_args)
      File "F:\automatic1111\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\fabric.py", line 475, in process
        patch_unet_forward_pass(p, unet, params)
      File "F:\automatic1111\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\patching.py", line 75, in patch_unet_forward_pass
        null_ctx = p.sd_model.get_learned_conditioning([""]).to(devices.device, dtype=devices.dtype_unet)
    AttributeError: 'dict' object has no attribute 'to'

Steps to reproduce: generate a random batch with FABRIC disabled. Pick some images you like and others you dislike, and tag them accordingly. Then start generating another batch with FABRIC enabled. The error above shows up multiple times before the main generation steps.

Models in use:
Checkpoint: sdxlNuclearGeneralPurposeSemi_v10.safetensors
VAE: sdxl_vae.safetensors
SD Unet: None

Startup options:

set COMMANDLINE_ARGS=--xformers --no-half-vae --gradio-img2img-tool color-sketch --deepdanbooru --update-check
set REQS_FILE=F:\automatic1111\stable-diffusion-webui\requirements.txt
set GIT_SSL_NO_VERIFY=true
set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512

NVIDIA RTX 4060 Ti, using FABRIC commit 929ac972 on A1111 v1.6.0

I don't know if you need other details, feel free to ask.

Thank you.

dvruette commented 1 year ago

It seems like you're using an SDXL model, but unfortunately, FABRIC is currently incompatible with SDXL. The issue you're running into is likely caused by that, so switching to an SD 1.x or 2.x model should solve it.
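For context on why it fails with exactly that error: on SD 1.x/2.x, `get_learned_conditioning([""])` returns a single tensor, while on SDXL it returns a dict of tensors, so the `.to(...)` call in `patching.py` raises the `AttributeError` from the traceback. A minimal stand-in sketch (no torch or webui needed; `FakeTensor` and the dict keys are hypothetical, chosen only to mirror the two return shapes):

```python
class FakeTensor:
    """Stand-in for torch.Tensor, which has a .to(device, dtype) method."""
    def to(self, *args, **kwargs):
        return self

# SD 1.x/2.x: get_learned_conditioning returns a single tensor-like object
cond_sd15 = FakeTensor()

# SDXL: it returns a dict of tensors instead (keys here are illustrative)
cond_sdxl = {"crossattn": FakeTensor(), "vector": FakeTensor()}

cond_sd15.to("cuda")          # fine on SD 1.x
try:
    cond_sdxl.to("cuda")      # what patching.py line 75 effectively does on SDXL
except AttributeError as e:
    print(e)                  # 'dict' object has no attribute 'to'
```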

zethriller commented 1 year ago

Oh OK, I didn't know that. Testing with a 1.5 model, I get something different. The log is awfully long, so I'll add ellipses for clarity.

[FABRIC] Patching U-Net forward pass... (12 likes, 8 dislikes)
  0%|                                                                                                               | 0/30 [00:04<?, ?it/s]
*** Error completing request
*** Arguments: ('task(bb5e0un1l1v319t)'

[...]

    Traceback (most recent call last):
      File "F:\automatic1111\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "F:\automatic1111\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "F:\automatic1111\stable-diffusion-webui\extensions\sd-webui-prompt-history\lib_history\image_process_hijacker.py", line 21, in process_images
        res = original_function(p)
      File "F:\automatic1111\stable-diffusion-webui\modules\processing.py", line 732, in process_images
        res = process_images_inner(p)
      File "F:\automatic1111\stable-diffusion-webui\modules\processing.py", line 867, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "F:\automatic1111\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\marking.py", line 29, in process_sample
        return process.sample_before_CN_hack(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\modules\processing.py", line 1140, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "F:\automatic1111\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "F:\automatic1111\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "F:\automatic1111\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 188, in forward
        x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "F:\automatic1111\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\patching.py", line 229, in new_forward
        out = self._fabric_old_forward(x, timesteps, context, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
        return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
        h = module(h, emb, context)
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
        x = layer(x, context)
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
        x = block(x, context=context[i])
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
        return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
      File "F:\automatic1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
        return CheckpointFunction.apply(func, len(inputs), *args)
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
        return super().apply(*args, **kwargs)  # type: ignore[misc]
      File "F:\automatic1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
        output_tensors = ctx.run_function(*ctx.input_tensors)
      File "F:\automatic1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 272, in _forward
        x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\patching.py", line 213, in patched_attn1_forward
        out_cond = attention_with_feedback(x[cond_ids], context[cond_ids], cached_hs[:num_pos], pos_weight)  # (n_cond, seq, dim)
      File "F:\automatic1111\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\patching.py", line 209, in attention_with_feedback
        return weighted_attention(attn1, attn1._fabric_old_forward, _x, ctx, weights, **kwargs)  # (n_cond, seq, dim)
      File "F:\automatic1111\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\weighted_attention.py", line 62, in weighted_attention
        return weighted_attn_fn(self, x, context=context, weights=weights, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\weighted_attention.py", line 197, in weighted_xformers_attention_forward
        out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=attn_bias, op=get_xformers_flash_attention_op(q, k, v))
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 192, in memory_efficient_attention
        return _memory_efficient_attention(
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 290, in _memory_efficient_attention
        return _memory_efficient_attention_forward(
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 310, in _memory_efficient_attention_forward
        out, *_ = op.apply(inp, needs_gradient=False)
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\cutlass.py", line 175, in apply
        out, lse, rng_seed, rng_offset = cls.OPERATOR(
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\_ops.py", line 502, in __call__
        return self._op(*args, **kwargs or {})
    RuntimeError: bias_4d_view.stride(0) overflows

Same thing if I disable sd-webui-prompt-history:

[FABRIC] Patching U-Net forward pass... (10 likes, 5 dislikes)
  0%|                                                                                                               | 0/20 [00:05<?, ?it/s]
*** Error completing request
*** Arguments: ('task(orlebogplwb7dxk)'
[...]
    Traceback (most recent call last):
      File "F:\automatic1111\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "F:\automatic1111\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "F:\automatic1111\stable-diffusion-webui\modules\processing.py", line 732, in process_images
        res = process_images_inner(p)
      File "F:\automatic1111\stable-diffusion-webui\modules\processing.py", line 867, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "F:\automatic1111\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\marking.py", line 29, in process_sample
        return process.sample_before_CN_hack(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\modules\processing.py", line 1140, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "F:\automatic1111\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "F:\automatic1111\stable-diffusion-webui\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "F:\automatic1111\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 235, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 188, in forward
        x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "F:\automatic1111\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\patching.py", line 229, in new_forward
        out = self._fabric_old_forward(x, timesteps, context, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
        return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
        h = module(h, emb, context)
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
        x = layer(x, context)
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
        x = block(x, context=context[i])
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
        return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
      File "F:\automatic1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
        return CheckpointFunction.apply(func, len(inputs), *args)
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
        return super().apply(*args, **kwargs)  # type: ignore[misc]
      File "F:\automatic1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
        output_tensors = ctx.run_function(*ctx.input_tensors)
      File "F:\automatic1111\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 272, in _forward
        x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\patching.py", line 213, in patched_attn1_forward
        out_cond = attention_with_feedback(x[cond_ids], context[cond_ids], cached_hs[:num_pos], pos_weight)  # (n_cond, seq, dim)
      File "F:\automatic1111\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\patching.py", line 209, in attention_with_feedback
        return weighted_attention(attn1, attn1._fabric_old_forward, _x, ctx, weights, **kwargs)  # (n_cond, seq, dim)
      File "F:\automatic1111\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\weighted_attention.py", line 62, in weighted_attention
        return weighted_attn_fn(self, x, context=context, weights=weights, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\weighted_attention.py", line 197, in weighted_xformers_attention_forward
        out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=attn_bias, op=get_xformers_flash_attention_op(q, k, v))
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 192, in memory_efficient_attention
        return _memory_efficient_attention(
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 290, in _memory_efficient_attention
        return _memory_efficient_attention_forward(
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 310, in _memory_efficient_attention_forward
        out, *_ = op.apply(inp, needs_gradient=False)
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\cutlass.py", line 175, in apply
        out, lse, rng_seed, rng_offset = cls.OPERATOR(
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\_ops.py", line 502, in __call__
        return self._op(*args, **kwargs or {})
    RuntimeError: bias_4d_view.stride(0) overflows

dvruette commented 1 year ago

OK, I haven't seen that one before. Could you share some more details on how to reproduce? Does it happen with completely default settings as well?

zethriller commented 1 year ago

There is something about stride() in issue #29, but I don't think it's exactly the same thing.

So here's the process:

SD model: celestialMagic_v30.safetensors (this is an SD 1.5 model)
VAE: blessed2.vae.pt (I think I renamed this one; use "blessed" if you can't find it)
UNet: None

dvruette commented 1 year ago

What about generation params, like width, height, hires fix, etc.? Does it happen even with defaults like 512x512? Unfortunately, issues like this can depend even on image size.

I’ve noticed some xformers issues myself for certain image sizes, but haven’t gotten around to fixing them yet. This one looks different though.

zethriller commented 1 year ago

Unless I need a super-custom format, I stick with 512x512, 512x768, and 768x512 for SD 1.5; this one was portrait mode. Will the generation settings from the .txt files help?

Pretty female lightning mage, dark purple long hair, noble attire, ((black costume with gold accents)), crystal earrings, crystal on forehead, black eyes, purple cape, purple rings shoulder pieces, ((black tight pants)), beautiful detailed eyes, (1girl:1.3), otherwordly beauty, casting pose, spellcasting, fantastical atmosphere, ethereal glowing runes, (<lora:GlowingRunesAIv4:0.2>, GlowingRunes_purple), (moonlight), (stars), (((chaotic, ((fractal)), (infinity), highly complex, intricate details) background)), empowering allies, purple lighting, best quality, illustration, masterpiece, ultra detailed, ultra high res, sharp focus, <lora:LightningVFXV1:0.8>

Negative prompt: easynegative, negative_hand-neg, NG_DeepNegative_V1_75T, (((amputation))), average, bad, ((bad art)), ((bad_anatomy)), bad_hands, bad_perspective, bad_proportions, bad_quality, badly_drawn, blur, blurry, body out of frame, boring, broken_finger, canvas frame, censored, claw, claws, cloned face, cross-eye, deformation, ((deformed)), ((description)), disconnected, disconnected limbs, ((disfigured)), disgusting, distorted hands, distortion, double hands, doubled face, dull, ((duplicate)), elves_ears, error, extra_arm, extra_digit, extra_feet, extra_finger, extra_leg, extra_limbs, fault, fewer_digits, ((floating limbs)), ((frame)), fused_feet, fused_fingers, fused_limbs, gross proportions, grayscale, ((inaccurate body)), jpeg_artifacts, jpg_artifacts, ((logo)), long_feet, long_hand, long_neck, low_detail, low_quality, low_resolution, lowres, malfomed arms, malformed hands, malformed legs, (malformed limbs), mediocre, misplaced, missing_arm, missing_feet, missing_finger, missing_hand, missing_leg, missing_limbs, ((morbid)), (((mutated))), mutated hands, ((((mutated hands and fingers)))), mutated_hand, ((mutation)), ((mutilated)), no_detail, no_quality, no_resolution, noise, nonsense, normal_detail, normal_resolution, obese, out_of_frame, overlay logo, ((poorly drawn eyes)), ((poorly drawn face)), ((poorly drawn feet)), ((poorly drawn hands)), ((poorly drawn mouth)), poorly_drawn, random, rushed, scribble, (((signature))), (swapped feet and hands), ((text)), ((title)), (too many fingers), ugly, unfinished, uninteresting, watermark, watermarked, wrong, ((3girls)), ((multiple girls)), 1boy, lens flares, ((nipples)), ((heels)), ((heterochromia)), (((make-up))), ((blush)), ((blushing)), (doll), (doll face), skin spots, acnes, skin blemishes, age spot, double navel, scar, (muted arms), ((hused arms)), ((hused legs)), ((wings)), ((antenna)), ((big breasts)), (elf ears), (pointy ears), (red eyes), (selfie), ((catgirl)), silhouette, ((window frame)), asymetrical armor, ((holding crystal)), ((simple background)), crown

Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 2483984345, Face restoration: CodeFormer, Size: 512x768, Model hash: d6ace93220, Model: celestialMagic_v30, VAE hash: 63aeecb90f, VAE: blessed2.vae.pt, Denoising strength: 0.3, ADetailer model: face_yolov8n.pt, ADetailer confidence: 0.2, ADetailer dilate/erode: 4, ADetailer mask blur: 4, ADetailer denoising strength: 0.4, ADetailer inpaint only masked: True, ADetailer inpaint padding: 32, ADetailer version: 23.9.1, Hires upscale: 2, Hires steps: 10, Hires upscaler: 4xUltrasharp_4xUltrasharpV10, Lora hashes: "GlowingRunesAIv4: 6abe47d58102, LightningVFXV1: 2777bba8d1d7", TI hashes: "easynegative: c74b4e810b03, negative_hand-neg: 73b524a2da12, ng_deepnegative_v1_75t: 54e7e4826d53", Version: v1.6.0

dvruette commented 1 year ago

Awesome, this should help a lot. I'll try to reproduce on my end; if I manage to, it should be easy to fix.

In the meantime, using FABRIC without any other plugins should have the highest chance of working as intended.

dvruette commented 11 months ago

I was able to reproduce it and find a fix. It seems like it was caused by the large number of feedback images in conjunction with some unfortunate type casting that led to extremely large strides, which xformers wasn't able to handle. Should be fixed with v0.6.3.
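For the curious, here is a hypothetical back-of-the-envelope sketch of how the stride can overflow: FABRIC concatenates the feedback images' hidden states into the self-attention keys, so the bias slice for one batch element grows with the number of feedback images, and its element count can exceed the 32-bit range, matching the `bias_4d_view.stride(0) overflows` error. All sizes below are assumptions, picked to roughly match the reported 512x768 + 2x hires setup:

```python
# All numbers below are illustrative assumptions, not values read from the logs.
INT32_MAX = 2**31 - 1

tokens_per_image = 128 * 192           # 1024x1536 px (512x768 + 2x hires), /8 latent
n_feedback = 20                        # e.g. 12 liked + 8 disliked images
q_len = tokens_per_image               # queries: tokens of the image being generated
k_len = n_feedback * tokens_per_image  # keys: all feedback tokens concatenated

# Elements in one batch slice of the attention bias, i.e. roughly its batch stride
batch_stride = q_len * k_len
print(batch_stride, batch_stride > INT32_MAX)  # 12079595520 True
```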

Woisek commented 6 months ago

> It seems like you're using an SDXL model, but unfortunately, FABRIC is currently incompatible with SDXL. The issue you're running into is likely caused by that, so switching to an SD 1.x or 2.x model should solve it.

Any chance of getting this working for SDXL soon, as more and more users are probably using SDXL as their generation model?

dvruette commented 6 months ago

Yes, I've managed to fix the SDXL compatibility issue, it should now be supported in v0.6.4.

Woisek commented 6 months ago

> Yes, I've managed to fix the SDXL compatibility issue, it should now be supported in v0.6.4.

That is fantastic! Will check it out, many thanks for your effort. 👍