megvii-research / FQ-ViT

[IJCAI 2022] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer
Apache License 2.0

What are the possible reasons for zero final accuracy? #26

Closed roncedupon closed 1 year ago

roncedupon commented 1 year ago

Following the paper, I randomly selected 1000 training images from ImageNet as the calibration dataset, but why does the final result show zero accuracy?
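The calibration-set construction described above (1000 images sampled at random from the ImageNet training split) can be sketched as follows. This is a minimal, self-contained illustration; the file list, seed, and sample size are assumptions, not the repository's actual data-loading code:

```python
import random

def sample_calibration_set(train_files, num_samples=1000, seed=0):
    """Randomly pick `num_samples` distinct training images for calibration."""
    rng = random.Random(seed)  # fixed seed so the calibration set is reproducible
    return rng.sample(train_files, num_samples)

# Hypothetical pool of ImageNet training image paths.
pool = [f"train/img_{i:07d}.JPEG" for i in range(10000)]
calib = sample_calibration_set(pool)
```

Sampling without replacement (as `random.sample` does) matters here: duplicated calibration images would skew the activation statistics collected during post-training quantization.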

Test: [104/115] Time 0.142 (0.400) Loss 9.3196 (9.1861) Prec@1 0.000 (0.000) Prec@5 0.000 (0.000)
Test: [105/115] Time 0.448 (0.400) Loss 9.2776 (9.1869) Prec@1 0.000 (0.000) Prec@5 0.000 (0.000)
Test: [106/115] Time 0.355 (0.400) Loss 8.9878 (9.1851) Prec@1 0.000 (0.000) Prec@5 0.000 (0.000)
Test: [107/115] Time 0.148 (0.397) Loss 9.2649 (9.1858) Prec@1 0.000 (0.000) Prec@5 0.000 (0.000)
Test: [108/115] Time 0.449 (0.398) Loss 9.0594 (9.1846) Prec@1 0.000 (0.000) Prec@5 0.000 (0.000)
Test: [109/115] Time 0.145 (0.396) Loss 8.9516 (9.1825) Prec@1 0.000 (0.000) Prec@5 0.000 (0.000)
Test: [110/115] Time 0.448 (0.396) Loss 9.0349 (9.1812) Prec@1 0.000 (0.000) Prec@5 0.000 (0.000)
Test: [111/115] Time 0.353 (0.396) Loss 9.1589 (9.1810) Prec@1 0.000 (0.000) Prec@5 0.000 (0.000)
Test: [112/115] Time 0.352 (0.395) Loss 9.4418 (9.1833) Prec@1 0.000 (0.000) Prec@5 0.000 (0.000)
Test: [113/115] Time 0.283 (0.394) Loss 9.4959 (9.1860) Prec@1 0.000 (0.000) Prec@5 0.000 (0.000)
Test: [114/115] Time 0.322 (0.394) Loss 9.3794 (9.1872) Prec@1 0.000 (0.000) Prec@5 0.000 (0.000)

PeiqinSun commented 1 year ago

Please check the acc1 of the float model first.
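A quick way to run the sanity check suggested above is to compute top-1/top-5 accuracy directly from the float model's logits before any quantization is applied; if the float model already scores near zero, the problem is in the checkpoint or data pipeline, not the quantizer. This helper is a minimal sketch (plain Python, not the repository's validation code), shown here with tiny hand-made logits:

```python
def topk_accuracy(logits, labels, k):
    """Fraction of samples whose true label is among the top-k scored classes."""
    correct = 0
    for row, label in zip(logits, labels):
        # indices of the k highest-scoring classes for this sample
        topk = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        if label in topk:
            correct += 1
    return correct / len(labels)

# Toy example: 2 samples, 3 classes (values are illustrative only).
logits = [[0.1, 0.9, 0.0],
          [0.8, 0.15, 0.05]]
labels = [1, 1]
top1 = topk_accuracy(logits, labels, k=1)  # sample 1 correct, sample 2 wrong
top2 = topk_accuracy(logits, labels, k=2)  # both labels fall in the top 2
```

If this reports a sensible top-1 (around 70-80% for the pretrained ViT/DeiT checkpoints) on the float model but the quantized run is still at zero, the issue lies in the calibration or quantization step rather than in data loading.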