-
Hello, could you tell me how Mamba performs cross-attention operations on images within a batch? I'm not sure if you've researched this area.
-
I'm working on an attention backend based on `xformers` to improve performance on V100; is there anything I need to be aware of when doing so or should it be straightforward?
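For context, a minimal sketch of the call such a backend typically wraps, assuming `xformers.ops.memory_efficient_attention`; the shapes and dtype here are illustrative:

```python
import torch
import xformers.ops as xops

# Shapes follow the (batch, seq_len, num_heads, head_dim) layout that
# memory_efficient_attention expects.
q = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)

# xformers dispatches a kernel per GPU architecture; on V100 (sm70)
# the FlashAttention path is unavailable, so the cutlass-based kernel
# is typically what gets selected -- worth confirming with a profiler.
out = xops.memory_efficient_attention(q, k, v)  # (2, 1024, 8, 64)
```

One V100-specific point to watch is exactly that kernel dispatch: checking which implementation actually runs on sm70 avoids benchmarking the wrong code path.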
-
Hi,
Thank you for your great work! It's really helpful in my research.
I'm interested in using NATTEN with linear attention, which can be simplified as `(q@k) @ v -> q@(k@v)`. This approach …
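To make the reordering concrete, here is a minimal sketch of the associativity trick, assuming the common `elu(x) + 1` feature map from kernelized linear attention; all names are illustrative:

```python
import torch
import torch.nn.functional as F

def feature_map(x):
    # A common non-negative kernel feature map for linear attention.
    return F.elu(x) + 1

def linear_attention(q, k, v):
    # q, k: (batch, heads, seq, dim); v: (batch, heads, seq, dim_v)
    q, k = feature_map(q), feature_map(k)
    # Associativity: instead of the O(seq^2) scores (q @ k^T) @ v,
    # build the O(dim * dim_v) summary k^T @ v first, then apply q.
    kv = torch.einsum("bhsd,bhse->bhde", k, v)             # k^T @ v
    z = 1.0 / (torch.einsum("bhsd,bhd->bhs", q, k.sum(dim=2)) + 1e-6)
    return torch.einsum("bhsd,bhde,bhs->bhse", q, kv, z)  # normalized q @ (k^T v)
```

The reordering replaces the (seq × seq) score matrix with a (dim × dim_v) summary, which is where the linear-in-sequence-length memory cost comes from.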
-
Hello, I have recently implemented a cross-attention application with multi-modal fusion, but because the image resolution is too large, a CUDA OOM occurs when calculating q and k, so I found your paper…
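For a sense of scale, a back-of-the-envelope sketch of why the full q @ k score matrix overflows memory at image resolutions; the 512×512 feature map, head count, and fp16 scores are illustrative assumptions:

```python
# Illustrative numbers only: a 512x512 feature map flattened into
# N = 512 * 512 query tokens, 8 heads, fp16 attention scores.
H = W = 512
heads = 8
N = H * W                        # 262,144 tokens
score_bytes = heads * N * N * 2  # 2 bytes per fp16 score
print(f"{score_bytes / 2**30:.0f} GiB")  # -> 1024 GiB for the scores alone
```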
-
Hi,
I got this issue on Mac:
```
/custom_nodes/comfyui-oms-diffusion/oms_diffusion_nodes.py", line 149, in get_area_and_mult
conditioning["c_attn_stored_area"] = AttnStoredExtra(torch.te…
```
-
"We do not have so many requests, actually.
We also have some internal discussions, but there are a lot of alternatives for the faster (lightweight) encoder and Squeezeformer does not come to a hig…
-
Maybe it is too niche, but we would be interested in a fused circular windowed attention.
Currently, we pad q, k, and v, use the fused kernel, and crop (a sketch of this workaround follows below).
It would either help if k and v could have different di…
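For reference, a minimal sketch of the pad-then-crop workaround described above, for a 1-D sequence; `fused_local_attn` is a hypothetical stand-in for whatever fused windowed kernel is in use, and the shapes are illustrative:

```python
import torch

def circular_windowed_attention(q, k, v, window, fused_local_attn):
    # q, k, v: (batch, heads, seq, dim); assumes window >= 2.
    pad = window // 2
    def wrap(x):
        # Circularly extend the sequence so edge positions see a
        # full window that wraps around to the other end.
        return torch.cat([x[..., -pad:, :], x, x[..., :pad, :]], dim=-2)
    out = fused_local_attn(wrap(q), wrap(k), wrap(v), window)
    # Crop the padded positions; only the original seq entries matter.
    return out[..., pad:-pad, :]
```

A natively circular kernel would avoid both the extra memory for the wrapped copies and the wasted compute on positions that are cropped away.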
-
**Describe the bug**
In corfunc-technical.rst:
- the Stribeck polydispersity needs defining
- there are some incorrectly formatted equations at the foot of the page
-
Hi, I have an attention_mask mismatch problem in the cross-attention.
Can you please explain this line:
`requires_attention_mask = "encoder_outputs" not in model_kwargs`?
Why does it come after this:
…
-