AmeenAli / HiddenMambaAttn
Official PyTorch Implementation of "The Hidden Attention of Mamba Models"
204 stars · 12 forks
Issues
Problem with the calculation time of the fine-tuned model? (#19) · chengyi-chris · opened 1 month ago · 0 comments
Errors when running code: selective_scan_cuda module not found (#18) · manuragkhullar · opened 2 months ago · 0 comments
About Attention Rollout (#17) · szubing · opened 3 months ago · 1 comment
Cannot install from source (#16) · chokevin8 · opened 3 months ago · 3 comments
Is there any code for analysing NLP models? (#15) · ChaoqiLiang · opened 3 months ago · 0 comments
About Train Time (#14) · xiequan277 · closed 4 months ago · 1 comment
Cross attention (#13) · zhending111 · closed 4 months ago · 2 comments
How can I get model.layers[n].mixer.xai_b? (#12) · DingjieFu · closed 5 months ago · 0 comments
Why is A^- a diagonal matrix? (#11) · CacatuaAlan · opened 6 months ago · 1 comment
Attention without for loops (#10) · AliYoussef97 · opened 7 months ago · 3 comments
2 (#9) · HiccupFL · closed 7 months ago · 0 comments
How can I get the "NLP Qualitative Results" in the paper? (#8) · AndssY · closed 7 months ago · 2 comments
Attention Computation Question (#7) · ChrisSsak · closed 4 months ago · 4 comments
Mamba attention matrix aggregation (#6) · patronum08 · closed 8 months ago · 2 comments
Could you please provide the download link for the file for model_type = 'vim_small_patch16_224_bimambav2_final_pool_mean_abs_pos_embed_with_midclstok_div2'? (#5) · xia-zhe · closed 8 months ago · 0 comments
Rollout calculation (#4) · sivaji123256 · closed 8 months ago · 1 comment
Fix readme typo (#3) · erjanmx · closed 8 months ago · 0 comments
Type Error (#2) · sivaji123256 · closed 9 months ago · 6 comments
Remove `causal_conv1d` req in mamba setup.py (#1) · bhoov · closed 9 months ago · 0 comments