kousw / stable-diffusion-webui-daam

DAAM for Stable Diffusion Web UI

Does not work for Stable Diffusion Webui Forge #31

Open · benjamin-bertram opened this issue 5 months ago

benjamin-bertram commented 5 months ago

Forge works with forge_clip.CLIP_SD_15_L instead of sd_hijack_clip.FrozenCLIPEmbedderWithCustomWords. I tried to rewrite it, but I don't understand how forge_clip.CLIP_SD15 works well enough to really make it work; I just get errors all the time. As you also implemented SDXL support, you might have a better understanding of forge_clip.CLIP_SD15 - I tried to reproduce your SDXL commits on Forge, but to no avail.

kousw commented 5 months ago

Forge is a good tool, so I'll consider supporting it. I haven't looked at the implementation yet, but I've heard that Forge includes a mechanism that makes it easy to hook into the unet, so I have a feeling that hooking the CLIP alone won't be enough to make it work correctly.

benjamin-bertram commented 5 months ago

It does not use sd_hijack_clip, but rather its own fork. This is the line where your script throws the "not supported embedder" error: https://github.com/lllyasviel/stable-diffusion-webui-forge/blob/d81e353d8928147bbd973068d0efbb2802affe0f/modules_forge/forge_clip.py#L16
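
(For context, here is a minimal sketch of what supporting Forge's embedder could look like. The helper name is_supported_embedder, the try/except import, and the class list are my own assumptions for illustration, not the extension's actual code; the point is only that the type check behind the "not supported embedder" error would also have to accept Forge's forge_clip wrappers.)

    # Hypothetical sketch, not the extension's actual code: extend the
    # embedder type check so Forge's CLIP wrapper classes are accepted too.
    from modules import sd_hijack_clip

    try:
        # Forge-only module; missing on the original webui.
        from modules_forge import forge_clip
        FORGE_EMBEDDER_CLASSES = (forge_clip.CLIP_SD_15_L,)  # plus the SD2/SDXL variants
    except ImportError:
        FORGE_EMBEDDER_CLASSES = ()

    def is_supported_embedder(embedder) -> bool:
        # True if the DAAM tracer knows how to read prompt chunks from this embedder.
        supported = (sd_hijack_clip.FrozenCLIPEmbedderWithCustomWords,) + FORGE_EMBEDDER_CLASSES
        return isinstance(embedder, supported)

If Forge's classes subclass the original embedder, an isinstance check like this may be all that is needed; a strict type(...) comparison in the extension would explain the "not supported embedder" error at the line linked above.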

kousw commented 5 months ago

It works on my end, but I still need to make some changes around attention, and it is no longer compatible with the original webui, so I'm wondering how best to update it. I've pushed it to the 'forge' branch for now.

benjamin-bertram commented 5 months ago

Nice, thanks. It throws me an error, but I cannot really see where it comes from:

Traceback (most recent call last):
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/modules_forge/main_thread.py", line 37, in loop
    task.work()
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/modules_forge/main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/modules/txt2img.py", line 111, in txt2img_function
    processed = processing.process_images(p)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/modules/processing.py", line 750, in process_images
    res = process_images_inner(p)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/modules/processing.py", line 921, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/modules/processing.py", line 1276, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/modules/sd_samplers_kdiffusion.py", line 251, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/modules/sd_samplers_common.py", line 263, in launch_sampling
    return func()
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/modules/sd_samplers_kdiffusion.py", line 251, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/repositories/k-diffusion/k_diffusion/sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/modules/sd_samplers_cfg_denoiser.py", line 182, in forward
    denoised = forge_sampler.forge_sample(self, denoiser_params=denoiser_params,
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/modules_forge/forge_sampler.py", line 82, in forge_sample
    denoised = sampling_function(model, x, timestep, uncond, cond, cond_scale, model_options, seed)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/ldm_patched/modules/samplers.py", line 289, in sampling_function
    cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond, x, timestep, model_options)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/ldm_patched/modules/samplers.py", line 258, in calc_cond_uncond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/ldm_patched/modules/model_base.py", line 89, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/extensions/stable-diffusion-webui-daam/scripts/daam/trace.py", line 41, in _forward
    super_return = hk_self.monkey_super('forward', *args, **kwargs)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/extensions/stable-diffusion-webui-daam/scripts/daam/hook.py", line 65, in monkey_super
    return self.old_state[f'old_fn_{fn_name}'](*args, **kwargs)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/ldm_patched/ldm/modules/diffusionmodules/openaimodel.py", line 867, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/ldm_patched/ldm/modules/diffusionmodules/openaimodel.py", line 55, in forward_timestep_embed
    x = layer(x, context, transformer_options)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/ldm_patched/ldm/modules/attention.py", line 620, in forward
    x = block(x, context=context[i], transformer_options=transformer_options)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/ldm_patched/ldm/modules/attention.py", line 447, in forward
    return checkpoint(self._forward, (x, context, transformer_options), self.parameters(), self.checkpoint)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/ldm_patched/ldm/modules/diffusionmodules/util.py", line 194, in checkpoint
    return func(*inputs)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/ldm_patched/ldm/modules/attention.py", line 547, in _forward
    n = self.attn2(n, context=context_attn2, value=value_attn2)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/extensions/stable-diffusion-webui-daam/scripts/daam/trace.py", line 284, in _forward
    out = hk_self._hooked_attention(self, q, k, v, batch_size, sequence_length, dim)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/extensions/stable-diffusion-webui-daam/scripts/daam/trace.py", line 369, in _hooked_attention
    maps = hk_self._up_sample_attn(attn_slice, value, factor)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/extensions/stable-diffusion-webui-daam/scripts/daam/trace.py", line 241, in _up_sample_attn
    map_ = F.interpolate(map_, size=(h_fix, w_fix), mode='bicubic')
  File "/Users/benjaminbertram/stable-diffusion-webui-forge/venv/lib/python3.10/site-packages/torch/nn/functional.py", line 4028, in interpolate
    return torch._C._nn.upsample_bicubic2d(input, output_size, align_corners, scale_factors)
RuntimeError: "compute_indices_weights_cubic" not implemented for 'Half'

kousw commented 5 months ago

Probably a problem specific to Mac M1 to M3. To work around it, I added a heatmap interpolation mode to the forge branch. The default is 'bicubic', but try 'bilinear' or 'conv'.

[screenshot of the new heatmap interpolation mode setting]
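
(A minimal sketch of what this workaround amounts to; the function name and the float32 cast below are illustrative assumptions, not the extension's actual code. Bicubic interpolation is not implemented for float16 on this backend, so the attention map has to be upsampled in another mode or cast to float32 first.)

    import torch
    import torch.nn.functional as F

    def up_sample_heatmap(map_: torch.Tensor, size, mode: str = "bicubic") -> torch.Tensor:
        # Upscale an attention map of shape (N, C, H, W) to the latent resolution.
        if mode == "bicubic" and map_.dtype == torch.float16:
            # Avoids: RuntimeError: "compute_indices_weights_cubic" not implemented for 'Half'
            return F.interpolate(map_.float(), size=size, mode="bicubic").to(map_.dtype)
        # 'bilinear' works in half precision; the extension's 'conv' option is
        # presumably a custom path and not an F.interpolate mode.
        return F.interpolate(map_, size=size, mode=mode)
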
benjamin-bertram commented 5 months ago

Here I first got

File "/Users/benjaminbertram/stable-diffusion-webui-forge/extensions/stable-diffusion-webui-daam/scripts/daam_script.py", line 187, in process_batch self.tracers = [trace(p.sd_model, p.height, p.width, context_size, interpolation_method = interpolation_mode)] TypeError: DiffusionHeatMapHooker.__init__() got an unexpected keyword argument 'interpolation_method'

so I thought the interpolation_method = interpolation_mode keyword argument might be causing the error and changed it to just interpolation_mode. But then I get

File "/Users/benjaminbertram/stable-diffusion-webui-forge/extensions/stable-diffusion-webui-daam/scripts/daam/trace.py", line 236, in _up_sample_attn value = value.permute(1, 0, 2) AttributeError: module 'torch.mps' has no attribute 'amp' module 'torch.mps' has no attribute 'amp'

I thus uncommented the with torch.cuda.amp.autocast(dtype=torch.float32): line, but I still have the same error. I cannot trace where another amp attribute is used in the code :/

It is probably something M1-related, but the script worked for me before I switched to Forge.
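
(A sketch of one way the amp usage could be made MPS-safe, assuming the problem is an unconditional reference to an amp/autocast module that does not exist under torch.mps; the helper name autocast_ctx is my own and this is a guess, not the extension's actual fix.)

    import contextlib
    import torch

    def autocast_ctx(device: torch.device):
        # Only CUDA reliably has an amp/autocast module here; torch.mps has no
        # 'amp' attribute in these builds, so fall back to a no-op context.
        if device.type == "cuda":
            return torch.cuda.amp.autocast(dtype=torch.float32)
        return contextlib.nullcontext()

    # usage: with autocast_ctx(value.device): ... upsample / matmul in the tracer ...
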

AlienRenders commented 4 months ago

Tried the forge branch and I get this error:

TypeError: UNetCrossAttentionHooker._forward() got an unexpected keyword argument 'value'

I also don't see any option for the interpolation mode.
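
(The earlier traceback shows Forge calling self.attn2(n, context=context_attn2, value=value_attn2), so the hooked cross-attention forward has to accept that extra keyword. Below is a rough sketch of the signature change only, with the body elided; it is a guess at the shape of the fix, not the extension's actual code.)

    # Hypothetical Forge-compatible signature for the hooked forward.
    def _forward(hk_self, self, x, context=None, value=None, mask=None, **kwargs):
        # Forge passes value= explicitly; older webui builds do not,
        # so fall back to the context tensor (standard cross-attention).
        if value is None:
            value = context
        ...
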

AlienRenders commented 4 months ago

Also, there's no enable button, so I have to manually disable the extension.

kousw commented 4 months ago

I removed the unnecessary amp code on the forge branch. I don't have an 'mps' environment so I can't test it there, but normal operation works fine.

Don't forget to 'Apply and restart UI' when you update the code!

Zyin055 commented 3 months ago

Can confirm the forge branch works on A1111-1.8-Forge after doing a git checkout forge in the daam extension folder.

Don't forget to 'Apply and restart UI' when you update the code!

That didn't work for me after the branch checkout, had to restart the whole application.

gshawn3 commented 3 months ago

The forge branch also works perfectly for me in the latest Forge build. I'm on Windows/Nvidia.

Thank you, this is a fantastic tool.