-
Hello, I recently implemented a cross-attention application with multi-modal fusion, but because the image resolution is too large, a CUDA OOM occurs when computing q and k, so I found your paper…
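For anyone hitting the same OOM: the full q·kᵀ score matrix is usually what blows up at high resolution, and one common workaround is to process the queries in chunks so that matrix is never materialized all at once. A minimal sketch, assuming PyTorch tensors laid out as (batch, heads, seq_len, head_dim); the function name and chunk size are illustrative, not from the paper:
```python
import torch
import torch.nn.functional as F

def chunked_cross_attention(q, k, v, chunk_size=1024):
    # Compute cross-attention over query chunks so only a (chunk_size x Lk) score
    # block exists at any time instead of the full (Lq x Lk) matrix.
    outputs = []
    for i in range(0, q.shape[2], chunk_size):
        q_chunk = q[:, :, i:i + chunk_size]
        # scaled_dot_product_attention uses a memory-efficient kernel where available
        outputs.append(F.scaled_dot_product_attention(q_chunk, k, v))
    return torch.cat(outputs, dim=2)
```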
-
Hello,
Thank you so much for your great work and codebase!
I would appreciate your clarifications on a few items.
1) From within ```TextToVideoSDPipelineCall.py```, at this [line](https://g…
-
Hi there,
Sorry if this is a stupid issue, but I was wondering if it would be possible to apply Ring Attention to cross-attention? I was thinking of using RingFlashAttentionCUDAFunction directly, but…
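For what it's worth, the core of ring attention does carry over to cross-attention, because the softmax over k/v can be accumulated shard by shard while q stays local. Below is a single-process sketch of that accumulation (online softmax over k/v shards); it is not the repo's RingFlashAttentionCUDAFunction, and all names and shapes are illustrative:
```python
import torch

def ring_style_cross_attention(q, kv_shards, scale):
    # q: (B, H, Lq, D); kv_shards: iterable of (k_i, v_i), each (B, H, Lk_i, D).
    # Online-softmax accumulation: only one shard's score block is in memory at a time.
    out = torch.zeros_like(q)
    row_max = q.new_full(q.shape[:-1] + (1,), float("-inf"))
    row_sum = q.new_zeros(q.shape[:-1] + (1,))
    for k, v in kv_shards:  # in true ring attention each shard lives on a different device
        scores = (q @ k.transpose(-2, -1)) * scale          # (B, H, Lq, Lk_i)
        new_max = torch.maximum(row_max, scores.amax(-1, keepdim=True))
        correction = torch.exp(row_max - new_max)           # rescale previous accumulators
        p = torch.exp(scores - new_max)
        out = out * correction + p @ v
        row_sum = row_sum * correction + p.sum(-1, keepdim=True)
        row_max = new_max
    return out / row_sum

# Sanity check against ordinary cross-attention on the concatenated k/v:
B, H, Lq, Lk, D = 1, 2, 16, 64, 8
q = torch.randn(B, H, Lq, D)
ks = [torch.randn(B, H, Lk // 4, D) for _ in range(4)]
vs = [torch.randn(B, H, Lk // 4, D) for _ in range(4)]
ref = torch.softmax(q @ torch.cat(ks, 2).transpose(-2, -1) * D**-0.5, -1) @ torch.cat(vs, 2)
assert torch.allclose(ring_style_cross_attention(q, list(zip(ks, vs)), D**-0.5), ref, atol=1e-5)
```
In an actual ring setup, each (k, v) shard would sit on a different device and rotate around the ring; the loop above just stands in for that rotation.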
-
Hi,
Thank you for releasing your code. I would like to understand where the decoupled cross-attention is used in the code, as stated in the paper. In the code, I only see concatenation. I wou…
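For readers with the same question, a minimal sketch of how decoupled cross-attention is usually described: separate k/v projections for the image features, whose attention output is added to the text branch, rather than concatenating image tokens onto the text tokens. This is an illustrative reading, not the repo's implementation, and all class and argument names are assumptions:
```python
import torch.nn as nn
import torch.nn.functional as F

class DecoupledCrossAttention(nn.Module):
    # One set of k/v projections for text tokens, a separate set for image tokens;
    # the two attention results are summed instead of the tokens being concatenated.
    def __init__(self, dim, cross_dim, num_heads=8, scale=1.0):
        super().__init__()
        self.num_heads, self.scale = num_heads, scale
        self.to_q = nn.Linear(dim, dim)
        self.to_k_text = nn.Linear(cross_dim, dim)
        self.to_v_text = nn.Linear(cross_dim, dim)
        self.to_k_image = nn.Linear(cross_dim, dim)  # extra, image-specific projections
        self.to_v_image = nn.Linear(cross_dim, dim)
        self.to_out = nn.Linear(dim, dim)

    def _attn(self, q, k, v):
        B, L, _ = q.shape
        h = self.num_heads
        q, k, v = (t.reshape(B, -1, h, t.shape[-1] // h).transpose(1, 2) for t in (q, k, v))
        out = F.scaled_dot_product_attention(q, k, v)
        return out.transpose(1, 2).reshape(B, L, -1)

    def forward(self, x, text_tokens, image_tokens):
        q = self.to_q(x)
        text_out = self._attn(q, self.to_k_text(text_tokens), self.to_v_text(text_tokens))
        image_out = self._attn(q, self.to_k_image(image_tokens), self.to_v_image(image_tokens))
        return self.to_out(text_out + self.scale * image_out)
```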
-
Hi,
thank you for your in-depth analysis.
Could you open-source the code for computing the cross-attention difference shown in Figure 2?
-
```python
if cross_attention_dim is None:
    attn_procs[name] = Consistent_AttProcessor(
        hidden_size=hidden_size, cross_attention_dim=cross_attention_dim, rank=self.lora_rank,…
```
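For context, a minimal sketch of the usual diffusers pattern this kind of snippet sits in, assuming a standard `UNet2DConditionModel`; `Consistent_AttProcessor` is the project's own class and is only used here with the arguments already shown above, while `lora_rank` and the `else` branch are placeholders:
```python
attn_procs = {}
for name in unet.attn_processors.keys():
    # In diffusers, self-attention layers ("attn1") have cross_attention_dim=None,
    # while cross-attention layers ("attn2") use the text-encoder dimension.
    cross_attention_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
    if name.startswith("mid_block"):
        hidden_size = unet.config.block_out_channels[-1]
    elif name.startswith("up_blocks"):
        block_id = int(name[len("up_blocks.")])
        hidden_size = list(reversed(unet.config.block_out_channels))[block_id]
    else:  # down_blocks
        block_id = int(name[len("down_blocks.")])
        hidden_size = unet.config.block_out_channels[block_id]
    if cross_attention_dim is None:
        attn_procs[name] = Consistent_AttProcessor(
            hidden_size=hidden_size, cross_attention_dim=cross_attention_dim, rank=lora_rank,
        )
    else:
        ...  # cross-attention branch, omitted here
unet.set_attn_processor(attn_procs)
```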
-
It seems they are somewhat similar; could you please describe the difference between them? Thank you!
-
Hi, I am aware that the implementation and source code of kernels like FMHA are not released. However, is there a guide or some reference I can use to create custom kernels related to attention? I would id…
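Not a kernel-writing guide, but one thing that helps when developing custom attention kernels is a plain reference implementation to validate numerics against. A minimal sketch, where PyTorch's fused op stands in for the custom kernel and the shapes are illustrative:
```python
import torch
import torch.nn.functional as F

def reference_attention(q, k, v):
    # Naive attention, used only as a numerical reference for a custom kernel.
    # q, k, v: (batch, heads, seq_len, head_dim)
    scale = q.shape[-1] ** -0.5
    scores = (q @ k.transpose(-2, -1)) * scale
    return torch.softmax(scores, dim=-1) @ v

q, k, v = (torch.randn(2, 8, 128, 64) for _ in range(3))
out_custom = F.scaled_dot_product_attention(q, k, v)  # replace with the custom kernel's output
assert torch.allclose(reference_attention(q, k, v), out_custom, atol=1e-4)
```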
-
ptp_utils.py imports CrossAttention from diffusers/models/cross_attention.py:
```from diffusers.models.cross_attention import CrossAttention```
But as of July 26 2023, cross_attention.py is depr…
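In case it helps others hitting this, a minimal compatibility shim, assuming the class was folded into `diffusers.models.attention_processor` as `Attention` in newer diffusers (please double-check against your installed version):
```python
try:
    from diffusers.models.cross_attention import CrossAttention  # older diffusers
except ImportError:
    # newer diffusers: the class lives in attention_processor and is named Attention
    from diffusers.models.attention_processor import Attention as CrossAttention
```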
-
During the reproduction process, I also wanted to observe the shape of the generated cross-attention maps, so I searched for relevant information and made common-sense changes to the c…
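One way to inspect those shapes without editing the pipeline itself is to install a logging attention processor; a minimal sketch for diffusers, where the class name, the decision to recompute the scores just for logging, and the assumed `pipe` object are my own choices (masking is ignored for the shape log):
```python
from diffusers.models.attention_processor import Attention, AttnProcessor

class ShapeLoggingProcessor(AttnProcessor):
    # Default attention behaviour, plus a print of the cross-attention map shape.
    def __call__(self, attn: Attention, hidden_states, encoder_hidden_states=None,
                 attention_mask=None, **kwargs):
        if encoder_hidden_states is not None:  # only cross-attention layers receive text tokens
            query = attn.head_to_batch_dim(attn.to_q(hidden_states))
            key = attn.head_to_batch_dim(attn.to_k(encoder_hidden_states))
            probs = attn.get_attention_scores(query, key, None)  # recomputed only for logging
            print("cross-attention map shape:", tuple(probs.shape))  # (batch*heads, q_len, text_len)
        return super().__call__(attn, hidden_states, encoder_hidden_states, attention_mask, **kwargs)

pipe.unet.set_attn_processor(ShapeLoggingProcessor())  # then run the pipeline as usual
```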