mit-han-lab / hardware-aware-transformers
[ACL'20] HAT: Hardware-Aware Transformers for Efficient Natural Language Processing
https://hat.mit.edu
License: Other · 329 stars · 50 forks
Issues (newest first)
#17  Question about the SubTransformers sampling process (Kevinpsk, opened 1 year ago, 0 comments)
#16  Used version of `fairseq` (macsz, opened 2 years ago, 0 comments)
#15  Question about the latency on Raspberry Pi (qaz12994, closed 2 years ago, 1 comment)
#14  Lower loss but the BLEU is 0 (leo038, opened 2 years ago, 0 comments)
#13  Does the generated latency include the embedding lookup table and the final output layers? (leo038, closed 2 years ago, 2 comments)
#12  What is the method used to sample training examples for the MLP latency predictor? (Mo-Abdelgawad, closed 2 years ago, 2 comments)
#11  Latency predictor relative error instead of absolute error (Mo-Abdelgawad, closed 3 years ago, 1 comment)
#10  Training a new SuperTransformer: calculating the number of SubTransformer combinations (ihish52, closed 3 years ago, 2 comments)
#9   How to use the processed data in your code? (pengfeiZhao1993, closed 3 years ago, 0 comments)
#8   About quantization friendliness (wangclnlp, closed 3 years ago, 1 comment)
#7   Question on how to evaluate inherited SubTransformers (ihish52, closed 3 years ago, 11 comments)
#6   Error in step 2.3 (evolutionary search with latency constraint) (ihish52, closed 3 years ago, 5 comments)
#5   Question about the number of parameters (huchinlp, closed 4 years ago, 2 comments)
#4   Questions about the search and training process (huchinlp, closed 4 years ago, 1 comment)
#3   Quantization on HAT (sugeeth14, closed 4 years ago, 4 comments)
#2   RAM in the used Raspberry Pi (Mo-Abdelgawad, closed 4 years ago, 2 comments)
#1   One question (zwjyyc, closed 4 years ago, 2 comments)