tigerlittle1 opened 4 months ago
**Is your feature request related to a problem? Please describe.**
When I pass the `out_dim` argument to `__init__` of the `Attention` block, a shape error is raised because `query_dim != out_dim`: the code at the link below tries to reshape `hidden_states` back to its original channel count, even though the output projection has changed it.
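A minimal sketch reproducing the error, assuming a diffusers version where `Attention` accepts `out_dim` (the shapes here are illustrative):

```python
import torch
from diffusers.models.attention_processor import Attention

# out_dim differs from query_dim, so to_out projects 64 -> 128 channels.
attn = Attention(query_dim=64, out_dim=128)

# A 4D (batch, channels, height, width) input takes the spatial code path.
hidden_states = torch.randn(2, 64, 8, 8)

# Expected to fail at the final reshape: the processor reshapes back using
# the input channel count (64) even though the tensor now has 128 channels.
out = attn(hidden_states)  # RuntimeError at the reshape
```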
**Describe the solution you'd like.**
I suggest changing https://github.com/huggingface/diffusers/blob/b69fd990ad8026f21893499ab396d969b62bb8cc/src/diffusers/models/attention_processor.py#L1393 to `hidden_states = hidden_states.transpose(-1, -2).reshape(batch_size, -1, height, width)`, so that the reshape respects the actual channel count of `hidden_states`.
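For context, the change at that line would look like this (the `channel` variable is captured from the 4D input earlier in the processor; surrounding code omitted):

```python
# current: reshape back with the channel count captured from the 4D input,
# which no longer matches once to_out has projected to out_dim
hidden_states = hidden_states.transpose(-1, -2).reshape(batch_size, channel, height, width)

# proposed: let reshape infer the channel dimension from the tensor itself
hidden_states = hidden_states.transpose(-1, -2).reshape(batch_size, -1, height, width)
```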
Maybe I will make a PR later.

**Describe alternatives you've considered.**
None.
**Additional context.**
None.