-
Great job! I wonder if you could kindly open-source the code for visualizing the attention map? Looking forward to your response! Thanks so much!
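While waiting for the authors' official code, a minimal sketch of how attention weights are typically extracted for visualization, assuming a standard scaled-dot-product formulation (all shapes and names here are illustrative, not taken from this repo):

```python
import numpy as np

def attention_map(q, k):
    """Return the (n_q, n_k) attention-weight matrix softmax(QK^T / sqrt(d))."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    w = np.exp(scores)
    return w / w.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
q = rng.normal(size=(8, 64))   # 8 query tokens, head dim 64 (illustrative)
k = rng.normal(size=(10, 64))  # 10 key tokens

attn = attention_map(q, k)
print(attn.shape)  # (8, 10); each row sums to 1
# To render it: plt.imshow(attn, cmap="viridis"); plt.colorbar(); plt.savefig("attn.png")
```

In practice you would grab `q` and `k` from the layer of interest (e.g. via a forward hook) rather than random data, and plot one head at a time.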
-
-
### Description of the bug
Not really a bug, but the scrolling in the layer view used to be much better.
I am not sure if this issue is specific to different mouse models, but it used to be that fo…
-
I'm aware that you plan to add compatibility slowly, but I just wanted to bring [Arts and Crafts](https://modrinth.com/mod/artsandcrafts) to your attention for eventual compatibility.
-
[Flash attention 3](https://tridao.me/blog/2024/flash3/) makes use of new features of the Hopper architecture.
- (async) WGMMA
- TMA
- overlap softmax
Are these all things that can currently (…
-
Hello, author:
In the CrossAttention class in utils.py, there is only one input parameter, x, so it actually computes self-attention. Is your code inconsistent with the content of your paper?
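I haven't checked this repo, but a common pattern in CrossAttention implementations (e.g. in Stable Diffusion's codebase) is a `context` argument that defaults to `x`, so a one-argument call degenerates to self-attention while the same class still supports cross-attention. A minimal single-head NumPy sketch of that pattern (names and shapes are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(x, context=None, *, wq, wk, wv):
    """Single-head attention; with context=None it falls back to self-attention."""
    if context is None:   # the common default-to-self pattern
        context = x
    q = x @ wq            # queries always come from x
    k = context @ wk      # keys/values come from the context sequence
    v = context @ wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    return attn @ v

rng = np.random.default_rng(0)
d = 16
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
x = rng.normal(size=(4, d))    # 4 query tokens
ctx = rng.normal(size=(7, d))  # 7 context tokens (e.g. text embeddings)

self_out = cross_attention(x, wq=wq, wk=wk, wv=wv)        # one input -> self-attention
cross_out = cross_attention(x, ctx, wq=wq, wk=wk, wv=wv)  # two inputs -> cross-attention
print(self_out.shape, cross_out.shape)  # (4, 16) (4, 16)
```

So a class whose forward takes only `x`, with no optional context, would indeed only ever compute self-attention; whether that matches the paper depends on whether a context path exists elsewhere in the code.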
-
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 1024 and using 5 heads.
…
-
Great work. Thanks for sharing.
In the paper it is said that the shape of the adjacency matrix is (n+1)×(n+1), which should be 22×22 (21 for joints and 1 for foot contact). However, in your implementa…
-
Thanks for your great work!
In Tab. 4, you showed the performance of finetuning GroundingDINO and GroundingREC.
I wonder how to perform zero-shot counting on FSC-147 using GroundingREC.
You made two c…
-
Hi,
The fact that it's possible to create arbitrary score mod / mask mod patterns is really powerful!
I'm wondering if there is any way to reason about the efficiency of different masking patter…
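One rough way to reason about this, assuming a block-sparse kernel of the kind behind FlexAttention's `create_block_mask`: estimate what fraction of attention tiles a mask zeroes out entirely, since fully-masked tiles can be skipped. This is a back-of-the-envelope sketch, not the library's API; the mask functions here take broadcastable NumPy index arrays rather than the real `mask_mod` signature:

```python
import numpy as np

def block_sparsity(mask_mod, seq_len, block=128):
    """Fraction of (block x block) tiles with no unmasked entry, i.e. tiles
    a block-sparse attention kernel could skip entirely."""
    q = np.arange(seq_len)[:, None]
    kv = np.arange(seq_len)[None, :]
    dense = mask_mod(q, kv)                  # (seq_len, seq_len) boolean mask
    n = seq_len // block
    tiles = dense[: n * block, : n * block].reshape(n, block, n, block)
    skippable = ~tiles.any(axis=(1, 3))      # tile contains no live entry
    return skippable.mean()

causal = lambda q, kv: kv <= q
sliding = lambda q, kv: (kv <= q) & (q - kv < 256)  # causal + 256-token window

print(block_sparsity(causal, 4096))   # ~0.48: causal skips almost half the tiles
print(block_sparsity(sliding, 4096))  # much higher: the window skips most tiles
```

The intuition: two masks with the same element-wise sparsity can have very different kernel cost depending on how well the masked region aligns with tile boundaries, so a quick tile-level count like this is a reasonable first-order efficiency proxy.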