megvii-research / FQ-ViT

[IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer
Apache License 2.0

tensorrt #29

Closed shuyuan-wang closed 1 year ago

shuyuan-wang commented 1 year ago

How does the latency of the quantized Swin Transformer on TensorRT compare with FP32?

linyang-zhh commented 1 year ago

Sorry, we did not carry out or release any experiments on hardware latency. You can find more information about latency on TensorRT and other platforms in Sparsebit and MMDeploy.
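
For anyone wanting to measure this themselves, below is a minimal sketch (not from the FQ-ViT repo) of one common workflow: export a Swin model to ONNX and benchmark it with TensorRT's `trtexec` tool, once in FP32 and once with `--int8`. The `timm` model name, input shape, and opset are assumptions; exporting Swin to ONNX may require a recent PyTorch version, and `--int8` without a calibration cache uses placeholder scales, so it is only meaningful for latency, not accuracy.

```python
# Sketch: export a Swin Transformer to ONNX for TensorRT latency benchmarking.
# Assumes timm is installed; model name and shapes are illustrative.
import torch
import timm

model = timm.create_model("swin_tiny_patch4_window7_224", pretrained=False)
model.eval()

dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model,
    dummy,
    "swin_tiny.onnx",
    input_names=["input"],
    output_names=["logits"],
    opset_version=13,
)

# On a machine with TensorRT installed, compare reported GPU latencies, e.g.:
#   trtexec --onnx=swin_tiny.onnx --shapes=input:1x3x224x224          # FP32 baseline
#   trtexec --onnx=swin_tiny.onnx --shapes=input:1x3x224x224 --int8   # INT8 latency only (no real calibration)
```

Note that this measures TensorRT's own INT8 path; reproducing FQ-ViT's specific quantizers (e.g. for LayerNorm and Softmax) on TensorRT would need custom plugins or a deployment toolchain such as MMDeploy or Sparsebit, as mentioned above.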