Woisek opened this issue 1 year ago
@Woisek maybe you watched my video. Did you try with a smaller number of words and verify that it works? It says you have used 66 tokens. I don't know if there is any limit for this script to work :/
The original authors of the script are here: https://github.com/castorini/daam
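On the token numbers in the log ("context_size=77, token_count=66"): CLIP's text encoder works on a fixed 77-position window, and the webui splits longer prompts into several such chunks. A minimal sketch of that arithmetic (my assumption about how the context size is derived, not the webui's actual code):

```python
import math

def context_size_for(token_count, chunk=75, specials=2):
    """Hedged sketch: each CLIP chunk holds 75 prompt tokens plus the
    BOS and EOS specials, so the context grows in steps of 77."""
    chunks = max(1, math.ceil(token_count / chunk))
    return chunks * (chunk + specials)

print(context_size_for(66))   # 77, matching "context_size=77, token_count=66"
print(context_size_for(150))  # 154, i.e. two chunks
```

So 66 tokens fit comfortably in one chunk; the token count by itself doesn't look like the problem here.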
@FurkanGozukara Yes, I did indeed watch your video, that's why I wanted to try it. When you say a smaller amount of words ... do you refer to the prompt or to the attention heatmap input field?
I think try both.
Did you make it work with any prompt and heatmap?
Had a similar issue, try disabling Hires. fix. That worked for me. I also deleted the venv and restarted the GUI, but I doubt that did anything.
Same issue here; no matter what I try: RuntimeError: shape '[8, 1, 26, 17]' is invalid for input of size 3888
@FurkanGozukara
Did you make it work with any prompt and heatmap?
Yes, using some other prompt did work. I have to look into that further.
Try using a square image and see if that helps. Sometimes restarting the webui resolves some of these as well.
Try using a square image and see if that helps.
It looks like this is indeed a/the solution. Creating a 543 x 768 image with a rather long prompt did not work; setting it to 768 x 768 made it work. Very strange, but thanks for this tip ...
Could also try multiples of 64; I just tried 512x768 and it worked. Possibly powers of 2 (8, 16, 32, 64) and so on.
Great info
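The workaround above boils down to making both dimensions divisible by 64, so every U-Net down-sampling level (x8 up to x64) divides them evenly. A small helper that rounds a requested size up to the nearest safe multiple (a heuristic based on this thread, not something from the DAAM docs):

```python
def daam_safe_size(width, height, multiple=64):
    """Round both dimensions up to the nearest multiple of 64.
    Heuristic from this thread: sizes like 543 trigger the reshape
    error, while multiples of 64 (512, 576, 768, ...) work."""
    def round_up(v):
        return ((v + multiple - 1) // multiple) * multiple
    return round_up(width), round_up(height)

print(daam_safe_size(543, 768))  # (576, 768) -- 543 gets bumped to 576
print(daam_safe_size(512, 768))  # (512, 768) -- already fine
```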
Saw this script presented on YT and installed it immediately. Unfortunately, it throws errors:

daam run with context_size=77, token_count=66
0%| | 0/50 [00:00<?, ?it/s]
Error completing request
Arguments: ('task(g5pt7xujbnyk3p1)', "Futuristic Vintage Medium Shot 1920's Poster electronic (russian:1.5) vintage girl, (nice face), robot and office, 1820, unreal engine, cozy indoor lighting, artstation, detailed, cinematic,character design by mark ryden and pixar and hayao miyazaki, unreal 5, daz, hyperrealistic, octane render", '2heads, elongated body, 2faces, cropped image, out of frame, draft, deformed hands, signatures, big hair, big eyes, twisted fingers, double image, long neck, malformed hands, multiple heads, extra limb, ugly, poorly drawn hands, missing limb, disfigured, cut-off, kitsch, over saturated, grain, low-res, Deformed, blurry, bad anatomy, disfigured, poorly drawn face, mutation, mutated, floating limbs, disconnected limbs, out of focus, long body, disgusting, poorly drawn, mutilated, mangled, surreal, extra fingers, duplicate artefacts, morbid, gross proportions, missing arms, mutated hands, mutilated hands, cloned face, malformed limbs, missing legs, signature, watermark, heterochromia', [], 50, 0, True, False, 1, 1, 5, 2861848008.0, 2489686902.0, 0.2, 0, 0, False, 768, 543, True, 0.7, 2, 'ESRGAN_4x', 0, 0, 0, [], 0, False, 'keyword prompt', 'random', 'None', 'textual inversion first', "Futuristic,Vintage,Medium Shot,1920's Poster,electronic,(russian:1.5),vintage girl,robot and office", False, False, False, True, 'Auto', 0.5, 1, False, False, None, '', 'outputs', False, False, 'positive', 'comma', 0, False, False, '', 'Illustration', 'svg', True, True, False, 0.5, True, 16, True, 16, 1, '', 0, '', 0, '', True, False, False, False, 0, 'Not set', True, True, '', '', '', '', '', 1.3, 'Not set', 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', False, 'None', 'Euler a', 0.95, 0.75, 'zero', 'pos', 'linear', 0.01, 0.0, 0.75, None, 'Lanczos', 1, 0, 0) {}
Traceback (most recent call last):
File "I:\Super SD 2.0\stable-diffusion-webui\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "I:\Super SD 2.0\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "I:\Super SD 2.0\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
processed = process_images(p)
File "I:\Super SD 2.0\stable-diffusion-webui\modules\processing.py", line 486, in process_images
res = process_images_inner(p)
File "I:\Super SD 2.0\stable-diffusion-webui\modules\processing.py", line 628, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "I:\Super SD 2.0\stable-diffusion-webui\modules\processing.py", line 828, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "I:\Super SD 2.0\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 323, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "I:\Super SD 2.0\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 221, in launch_sampling
return func()
File "I:\Super SD 2.0\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 323, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "I:\Super SD 2.0\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "I:\Super SD 2.0\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "I:\Super SD 2.0\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "I:\Super SD 2.0\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 135, in forward
x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond={"c_crossattn": c_crossattn, "c_concat": [image_cond_in[a:b]]})
File "I:\Super SD 2.0\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "I:\Super SD 2.0\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "I:\Super SD 2.0\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "I:\Super SD 2.0\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "I:\Super SD 2.0\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
return self.__orig_func(*args, **kwargs)
File "I:\Super SD 2.0\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "I:\Super SD 2.0\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "I:\Super SD 2.0\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1329, in forward
out = self.diffusion_model(x, t, context=cc)
File "I:\Super SD 2.0\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "I:\Super SD 2.0\stable-diffusion-webui\extensions\stable-diffusion-webui-daam\scripts\daam\trace.py", line 41, in _forward
super_return = hk_self.monkey_super('forward', *args, **kwargs)
File "I:\Super SD 2.0\stable-diffusion-webui\extensions\stable-diffusion-webui-daam\scripts\daam\hook.py", line 65, in monkey_super
return self.old_state[f'old_fn_{fn_name}'](*args, **kwargs)
File "I:\Super SD 2.0\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 776, in forward
h = module(h, emb, context)
File "I:\Super SD 2.0\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "I:\Super SD 2.0\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
x = layer(x, context)
File "I:\Super SD 2.0\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "I:\Super SD 2.0\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 324, in forward
x = block(x, context=context[i])
File "I:\Super SD 2.0\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "I:\Super SD 2.0\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 259, in forward
return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
File "I:\Super SD 2.0\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 114, in checkpoint
return CheckpointFunction.apply(func, len(inputs), *args)
File "I:\Super SD 2.0\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 129, in forward
output_tensors = ctx.run_function(*ctx.input_tensors)
File "I:\Super SD 2.0\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 263, in _forward
x = self.attn2(self.norm2(x), context=context) + x
File "I:\Super SD 2.0\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "I:\Super SD 2.0\stable-diffusion-webui\extensions\stable-diffusion-webui-daam\scripts\daam\trace.py", line 277, in _forward
out = hk_self._hooked_attention(self, q, k, v, batch_size, sequence_length, dim)
File "I:\Super SD 2.0\stable-diffusion-webui\extensions\stable-diffusion-webui-daam\scripts\daam\trace.py", line 354, in _hooked_attention
maps = hk_self._up_sample_attn(attn_slice, value, factor)
File "I:\Super SD 2.0\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "I:\Super SD 2.0\stable-diffusion-webui\extensions\stable-diffusion-webui-daam\scripts\daam\trace.py", line 237, in _up_sample_attn
map = map.unsqueeze(1).view(map.size(0), 1, h, w)
RuntimeError: shape '[8, 1, 95, 67]' is invalid for input of size 51456
Any suggestions?
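For what it's worth, the numbers in the error are reproducible with plain arithmetic, which explains why square (or multiple-of-64) sizes work. A hedged sketch — the shape-inference heuristic below is my reconstruction of the kind of aspect-ratio guess the upsampling code makes, not the extension's exact code:

```python
import math

def inferred_map_shape(numel, width, height):
    """Guess (h, w) for a flattened attention map of `numel` positions
    from the image aspect ratio (a reconstruction, not DAAM's code)."""
    aspect = height / width
    h = round(math.sqrt(numel * aspect))
    w = round(math.sqrt(numel / aspect))
    return h, w

# 768 x 543 image: 51456 attention positions over 8 heads = 6432 per head.
h, w = inferred_map_shape(51456 // 8, 543, 768)
print((h, w), h * w)  # (95, 67), 6365 -- but 6432 elements exist, so view() fails

# 768 x 768 image: 96 x 96 = 9216 positions per head; the guess is exact.
print(inferred_map_shape(9216, 768, 768))  # (96, 96)
```

When a dimension is not a clean multiple of the U-Net's downscaling factor, the rounded guess (95 x 67) no longer multiplies out to the tensor's true element count, and `view()` raises exactly the "shape ... is invalid for input of size ..." error seen above.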