Rajesh1215 opened 1 month ago
Hi @Rajesh1215 ,
Please recheck your version of the diffusers package. Version 0.9.0 should work well.
Hope this helps!
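For anyone hitting the same mismatch, here is a quick way to confirm which version is actually installed in the Colab runtime. This is a minimal sketch assuming a standard pip environment; nothing in it is specific to Diff-Harmonization:

    import importlib.metadata

    # Print the diffusers version the current Python environment resolves to.
    print(importlib.metadata.version("diffusers"))

    # If it is not the version suggested above, reinstall the pinned release, e.g.
    #   pip install diffusers==0.9.0
    # and restart the runtime so the new version is picked up.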
Thank you, I will check that.
@WindVChen
I have changed the diffusers version, but I got a different error:
Traceback (most recent call last):
  File "/content/Diff-Harmonization/main.py", line 3, in <module>
This is my Colab file: https://colab.research.google.com/drive/1mqu757wKRyvU8nuUDv32eDRPd1J4fu4-?authuser=1#scrollTo=-wCXjcP-eZXr
Could you please help me solve this error?
Please help me solve this issue:

Optimize_text_embed:   0% 0/49 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/content/Diff-Harmonization/main.py", line 306, in <module>
    harmon_fun(composite_image, prompts, ldm_stable, diffusion_steps, guidance=guidance, generator=generator,
  File "/content/Diff-Harmonization/main.py", line 46, in run_harmonization_no_evaluator
    outimg, = diff_harmon.run(diffusion_model, prompts[0], controller, latent=x_t,
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/content/Diff-Harmonization/diff_harmon.py", line 480, in run
    constraint_text_emb = attention_constraint_text_optimization(init_prompt, model, mask, latent,
  File "/content/Diff-Harmonization/diff_harmon.py", line 367, in attention_constraint_text_optimization
    attention_map_fg = aggregate_attention(prompt, controller, size // 32, ("up", "down"), True, 0).cuda()
  File "/content/Diff-Harmonization/utils.py", line 115, in aggregate_attention
    for item in attention_maps[f"{location}_{'cross' if is_cross else 'self'}"]:
KeyError: 'up_cross'
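For context on what this KeyError means: aggregate_attention builds the dictionary key as f"{location}_{'cross' if is_cross else 'self'}", so the controller's attention store has no "up_cross" entry at all when the lookup runs. The sketch below is illustrative only; the variable names mirror the traceback rather than the repo's actual code, and an unpopulated store is just one plausible cause (for example, if the attention hooks never fire under a newer diffusers API):

    # Illustrative only: what the lookup at utils.py line 115 does, and how it
    # fails when the attention store has no "up_cross" entry.
    attention_store = {}  # a store that was never populated during the diffusion steps

    location, is_cross = "up", True
    key = f"{location}_{'cross' if is_cross else 'self'}"  # -> "up_cross"

    try:
        attention_maps = attention_store[key]
    except KeyError as err:
        print(f"KeyError: {err}")  # same failure as in the traceback above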