sen-mao / SuppressEOT

Official implementation of "Get What You Want, Not What You Don't: Image Content Suppression for Text-to-Image Diffusion Models" (ICLR 2024)
https://arxiv.org/abs/2402.05375

KeyError: 'up_cross' #1

Closed: one-zd closed this issue 3 months ago

one-zd commented 3 months ago

RUN:

```shell
python suppress_eot_w_nulltext.py --type Real-Image \
       --prompt "A man with a beard wearing glasses and a hat in blue shirt" \
       --image_path "./example_images/A man with a beard wearing glasses and a hat in blue shirt.jpg" \
       --token_indices "[[4,5],[7],[9,10],]" \
       --alpha "[1.,]" --cross_retain_steps "[.2,]"
```

ERROR:

```
Traceback (most recent call last):
  File "E:\code\SuppressEOT-master\suppress_eot_w_nulltext.py", line 459, in <module>
    main(args, stable)
  File "E:\code\SuppressEOT-master\suppress_eot_w_nulltext.py", line 414, in main
    image_inv, x_t = run_and_display(stable, [prompt], controller, latent=x_t, uncond_embeddings=uncond_embeddings, verbose=False)
  File "E:\code\SuppressEOT-master\suppress_eot_w_nulltext.py", line 336, in run_and_display
    images, x_t = text2image_ldm_stable(ldm_stable, prompts, controller, latent=latent,
  File "C:\Users\anaconda3\envs\eot\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "E:\code\SuppressEOT-master\suppress_eot_w_nulltext.py", line 277, in text2image_ldm_stable
    attention_maps = aggregate_attention(controller, 16, ["up", "down"], is_cross=True)
  File "E:\code\SuppressEOT-master\suppress_eot_w_nulltext.py", line 151, in aggregate_attention
    for item in attention_maps[f"{location}_{'cross' if is_cross else 'self'}"]:
KeyError: 'up_cross'
```

On Windows 11, I debugged the code and found that `attention_maps` is empty. How can I fix this? Looking forward to your reply, thank you very much!
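For reference, the failure mode can be reproduced in isolation. This is a hedged illustration, not the project's actual code: the aggregation step builds keys such as `up_cross` and indexes the attention store directly, so if the store was never populated, the first lookup raises the `KeyError` seen above.

```python
# Illustration only (not SuppressEOT's code): an attention store that was
# never populated raises KeyError on the very first aggregated lookup.
attention_store = {}  # stays empty when the attention hooks never fire

location, is_cross = "up", True
key = f"{location}_{'cross' if is_cross else 'self'}"

try:
    maps = attention_store[key]
except KeyError as err:
    print(f"KeyError: {err}")  # prints: KeyError: 'up_cross'
```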

sen-mao commented 3 months ago

This issue is usually caused by the version of diffusers. Please refer to https://github.com/google/prompt-to-prompt/issues/57#issuecomment-1613729431 and try installing the latest version. In my experience, installing diffusers version 0.17.1 or newer should resolve the issue, as the Attention class has been rewritten.
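As a quick sanity check, the installed version can be compared against that 0.17.1 threshold. The helper below is a generic sketch, not part of the SuppressEOT codebase, and assumes a plain `X.Y.Z` version string (no `rc`/`dev` suffixes):

```python
# Generic sketch: check the installed diffusers version against the
# 0.17.1 threshold mentioned above. Assumes a plain "X.Y.Z" string;
# not part of the SuppressEOT codebase.
def version_at_least(installed: str, required: str = "0.17.1") -> bool:
    as_tuple = lambda v: tuple(int(part) for part in v.split(".")[:3])
    return as_tuple(installed) >= as_tuple(required)

print(version_at_least("0.16.1"))  # False -> upgrade diffusers
print(version_at_least("0.17.1"))  # True
```

In practice, `import diffusers; print(diffusers.__version__)` shows the installed version, and upgrading via pip resolves the mismatch.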

one-zd commented 3 months ago

> This issue is usually caused by the version of diffusers. Please refer to google/prompt-to-prompt#57 (comment) and try installing the latest version. In my experience, installing diffusers version 0.17.1 or newer should resolve the issue, as the Attention class has been rewritten.

Thank you very much for your reply. Following your suggestion, the problem has been solved.