megvii-research / FQ-ViT

[IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer
Apache License 2.0

Quantized model with FQ-ViT #10

Closed uniqzheng closed 2 years ago

uniqzheng commented 2 years ago

Hi! Thanks for your great work! Would you please provide the saved quantized model (ViT-base-patch16-384) with FQ-ViT?

linyang-zhh commented 2 years ago

Hi, @uniqzheng
Sorry, we have not provided quantized models or an interface for saving them. However, you can refer to #1 and https://github.com/megvii-research/FQ-ViT/issues/15#issuecomment-1143544185, and save the quantized models yourself.

If you run into other problems during the above steps, please feel free to reopen this issue.
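Since the repo does not ship a save/load interface, saving the calibrated model can be done with plain PyTorch. A minimal sketch, assuming the quantized model is an ordinary `nn.Module` (the `nn.Linear` stand-in and the file name here are placeholders, not FQ-ViT API):

```python
import torch
import torch.nn as nn

# Placeholder for a calibrated FQ-ViT model; any nn.Module works the same way.
model = nn.Linear(4, 2)

# Save only the state dict (preferred over pickling the whole module).
torch.save(model.state_dict(), "fqvit_quantized.pth")

# Later: rebuild the same architecture, then restore the saved weights.
restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load("fqvit_quantized.pth"))
```

Note that `load_state_dict` requires the target module to have the same architecture (and hence the same parameter names and shapes) as the one that was saved.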