kssteven418/LTP
[KDD'22] Learned Token Pruning for Transformers
https://arxiv.org/abs/2107.00910
Apache License 2.0 · 91 stars · 16 forks
Issues
#14 · Does the attention mask reduce computation cost? · opened by brotherb 9 months ago · 0 comments
#13 · Inference about hard pruning · opened by cynthia0114 1 year ago · 0 comments
#12 · No mask used in evaluation process · opened by shawnricecake 1 year ago · 0 comments
#11 · Cannot run with or without installing transformers · opened by KevinHooah 1 year ago · 1 comment
#10 · Why is masking not used during testing? · opened by sev777 1 year ago · 0 comments
#9 · Some specified arguments are not used by the HfArgumentParser · opened by linxid 2 years ago · 4 comments
#8 · Question about the max seq length · opened by XueqiYang 2 years ago · 2 comments
#7 · Where to get the pretrained model with max-seq-length over 512? · opened by yhy-2000 2 years ago · 4 comments
#6 · FLOPs · opened by Cydia2018 2 years ago · 2 comments
#5 · Refactoring · closed by kssteven418 2 years ago · 0 comments
#4 · Refactoring · closed by kssteven418 2 years ago · 0 comments
#3 · Will the token number become larger when fixing the threshold (hard training step)? · opened by DreamsofGg 3 years ago · 0 comments
#2 · Initial commit for LTP implementation · closed by kssteven418 3 years ago · 0 comments
#1 · LTP basic implementation · closed by kssteven418 3 years ago · 0 comments
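Several of the issues above (#3, #10, #12, #14) revolve around the same point: how threshold-based token pruning behaves at inference time and why an attention mask alone does not reduce FLOPs. As a rough illustration only, here is a minimal sketch of hard, threshold-based token pruning, assuming token importance is approximated by the average attention a token receives; the function name, shapes, and scoring rule are hypothetical and are not taken from this repository.

```python
import torch

def prune_tokens(hidden_states, attention_probs, threshold):
    """Hypothetical sketch of threshold-based token pruning.

    hidden_states:   (batch, seq_len, hidden) token representations
    attention_probs: (batch, heads, seq_len, seq_len) softmax attention
    threshold:       scalar pruning threshold (learned per layer in LTP)
    """
    # Proxy for token importance: average attention each token receives,
    # taken over heads and query positions (details differ per method).
    importance = attention_probs.mean(dim=1).mean(dim=1)  # (batch, seq_len)

    # Hard pruning: keep only tokens whose importance exceeds the
    # threshold. (During training, LTP instead uses a soft,
    # differentiable mask so the threshold can be learned.)
    keep_mask = importance > threshold  # (batch, seq_len) boolean

    # With batch size 1, pruned tokens can actually be dropped, shrinking
    # seq_len and hence the cost of all later layers. With batching, a
    # mask is typically applied instead, which zeroes contributions but
    # does not by itself reduce the computation performed.
    kept = hidden_states[keep_mask].unsqueeze(0)  # assumes batch == 1
    return kept, keep_mask
```

The last comment is the crux of issue #14: masking removes a token's influence on the output, but the matrix multiplications still run at full sequence length, so real speedups require physically removing tokens (or an implementation that exploits the sparsity).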