mit-han-lab / lite-transformer

[ICLR 2020] Lite Transformer with Long-Short Range Attention
https://arxiv.org/abs/2004.11886

Please share your quantization, quantization+pruning checkpoints #26

Closed · opened by kishorepv · closed 2 months ago

kishorepv commented 3 years ago

Hi,

Could you please share the trained checkpoints for the quantized and quantized+pruned models (shown in this plot: https://github.com/mit-han-lab/lite-transformer#further-compress-transformer-by-182x)?

I am interested in testing them on the translation and summarization tasks, and would appreciate it if you could share those checkpoints.

Thank you
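In case the checkpoints never surface: below is a minimal sketch of how one might approximate the quantization+pruning step on a self-trained model, using stock PyTorch dynamic 8-bit quantization and L1 magnitude pruning as stand-ins. This is not the repository's own 18.2x compression pipeline; the `compress` helper, the `prune_amount` value, and the output filename are all hypothetical.

```python
# Hypothetical sketch, not the authors' pipeline: approximate
# quantization + pruning on a trained model with stock PyTorch.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune


def compress(model: nn.Module, prune_amount: float = 0.5) -> nn.Module:
    # Magnitude-prune the weight of every linear layer, then make the
    # pruning permanent so the state dict stores plain tensors again.
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=prune_amount)
            prune.remove(module, "weight")
    # Dynamic 8-bit quantization of the linear layers: weights are
    # stored as int8, activations are quantized on the fly at inference.
    return torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )


# Usage: load a trained checkpoint, compress it, and save the result.
# model = ...  # a trained Lite Transformer instance
# compressed = compress(model, prune_amount=0.5)
# torch.save(compressed.state_dict(), "lite_transformer_q8_pruned.pt")
```

Note that this only mimics the general recipe (prune, then quantize); matching the accuracy/size trade-off reported in the plot would still require the original training and compression settings.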

kishorepv commented 3 years ago

@Michaelvll

zhijian-liu commented 2 months ago

Thank you for your interest in our project. Unfortunately, this repository is no longer actively maintained, so we will be closing this issue. If you have any further questions, please feel free to email us. Thank you again!