mit-han-lab / fastcomposer

[IJCV] FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention
https://fastcomposer.mit.edu
MIT License
641 stars 36 forks

enable_xformers_memory_efficient_attention is not supported #18

Open JarvisFei opened 1 year ago

JarvisFei commented 1 year ago

```
File "fastcomposer/fastcomposer/model.py", line 571, in forward
    localization_loss = get_object_localization_loss(
File "fastcomposer/model.py", line 416, in get_object_localization_loss
    return loss / num_layers
ZeroDivisionError: division by zero
```

Is there a solution to this problem when using enable_xformers_memory_efficient_attention? @Guangxuan-Xiao
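A likely cause (an assumption, not confirmed by the maintainers): with memory-efficient attention enabled, the attention processors that record cross-attention maps are bypassed, so no per-layer localization loss is collected and `num_layers` ends up 0. A minimal sketch with hypothetical names (this is not the repository's code) of a guard that fails with a clearer message instead of `ZeroDivisionError`:

```python
def average_localization_loss(per_layer_losses):
    """Average per-layer localization losses over the layers that produced one."""
    num_layers = len(per_layer_losses)
    if num_layers == 0:
        # With xformers/SDPA processors, attention probabilities are never
        # materialized, so no per-layer loss can be collected.
        raise RuntimeError(
            "No cross-attention maps were collected; disable "
            "memory-efficient attention while training with the localization loss."
        )
    return sum(per_layer_losses) / num_layers
```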

JarvisFei commented 1 year ago

In model.py, why is the code written like this?

```python
if isinstance(module.processor, AttnProcessor2_0):
    module.set_processor(AttnProcessor())
```
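A plausible reason (my reading, not an official answer): `AttnProcessor2_0` routes through `torch.nn.functional.scaled_dot_product_attention`, a fused kernel that never exposes the softmax attention matrix, while the localization loss needs those cross-attention maps, so the vanilla `AttnProcessor` is restored. A sketch contrasting the two paths (standalone code, not the repository's):

```python
import torch
import torch.nn.functional as F

def attention_with_probs(q, k, v):
    # Vanilla attention: the softmax matrix is materialized, so a
    # localization loss can read and supervise it.
    scale = q.shape[-1] ** -0.5
    probs = torch.softmax(q @ k.transpose(-2, -1) * scale, dim=-1)
    return probs @ v, probs

def attention_fused(q, k, v):
    # PyTorch 2.0 fused kernel: same output, but the attention
    # probabilities are never exposed, so there is nothing to hook.
    return F.scaled_dot_product_attention(q, k, v)
```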

Also, how can I accelerate the training process with torch.compile() when using PyTorch 2.0?
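The general torch.compile pattern (a hedged sketch, not FastComposer's training script; the toy model here is a placeholder) is to wrap the module once and let the first forward pass capture the graph. The default `inductor` backend generates fused kernels; `backend="eager"` is used below only so the snippet runs without a compiler toolchain:

```python
import torch

# Placeholder model standing in for the UNet; not FastComposer's API.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 16)
)
# Requires PyTorch >= 2.0; drop backend= to use the default inductor backend.
compiled = torch.compile(model, backend="eager")

x = torch.randn(4, 16)
out = compiled(x)  # first call triggers graph capture; later calls reuse it
```

Note that whether the localization-loss attention hooks survive graph capture is untested here, so verify the loss values match an uncompiled run before training at scale.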

xilanhua12138 commented 2 months ago

Same problem here. Did you solve it? If so, could you share the fix?