PaddlePaddle / Paddle

PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (the PaddlePaddle core framework: high-performance single-machine and distributed training and cross-platform deployment for deep learning and machine learning)
http://www.paddlepaddle.org/
Apache License 2.0

Picodet INT8 is slower than FP32 when inference with MKLDNN #44075

Closed · yeliang2258 closed this 2 years ago

yeliang2258 commented 2 years ago

Describe the Bug

Picodet INT8 is slower than FP32 when inference with MKLDNN.

CPU: Intel(R) Xeon(R) Gold 6271C CPU @ 2.60GHz, thread num: 8. FP32: 3.09 s, INT8: 3.13 s.

My model and script: picobug.tar.gz
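
For reference, a minimal MKLDNN timing harness along these lines can reproduce the comparison. This is a sketch, not the attached script: the file names, the single image-like input, the 416x416 shape (inferred from the model name), and the iteration counts are all assumptions.

```python
import time
import numpy as np
import paddle.inference as paddle_infer

# Assumed file names; use the files from picobug.tar.gz.
config = paddle_infer.Config("model.pdmodel", "model.pdiparams")
config.disable_gpu()
config.enable_mkldnn()
config.set_cpu_math_library_num_threads(8)
# For the INT8 run on a quantized model (available in recent Paddle releases):
# config.enable_mkldnn_int8()

predictor = paddle_infer.create_predictor(config)

# Assumes a single image-like input; detection models often also take
# a scale_factor input, which would need to be set the same way.
image = predictor.get_input_handle(predictor.get_input_names()[0])
image.reshape([1, 3, 416, 416])
image.copy_from_cpu(np.random.rand(1, 3, 416, 416).astype("float32"))

for _ in range(10):   # warm-up
    predictor.run()
start = time.time()
for _ in range(100):  # timed loop
    predictor.run()
print(f"avg latency: {(time.time() - start) / 100 * 1000:.1f} ms")
```

Running it once with `enable_mkldnn_int8()` commented out (FP32) and once with it enabled (INT8) gives the two numbers being compared.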

Additional Supplementary Information

No response

paddle-bot-old[bot] commented 2 years ago

Hi! We've received your issue; please be patient while we get it answered. We will arrange for technicians to respond as soon as possible. Please double-check that you have provided a clear problem description, reproduction code, environment & version, and error messages. You may also look for an answer in the API documentation, FAQ, historical issues, and the AI community. Have a nice day!

wozna commented 2 years ago

Hi @yeliang2258, I am working on improving performance for the INT8 model. But as I mentioned before, it is a very difficult case: the convolutions have very small filters, which is why avx512_vnni INT8 will not give us much speed-up. So the performance is worse because of the INT8 conversion overhead. We still have a few ideas left to implement.
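
To make the overhead concrete, here is a conceptual numpy sketch (not Paddle's actual kernels; `int8_conv` is hypothetical): every INT8 region pays a quantize pass on the way in and a dequantize pass on the way out, and these fixed memory passes are hard to amortize when the convolution itself is tiny.

```python
import numpy as np

def quantize(x, scale):
    # float32 -> int8: a full pass over the activation tensor
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def dequantize(q, scale):
    # int8 -> float32: another full pass on the way out
    return q.astype(np.float32) * scale

# Around each INT8 convolution the runtime effectively does:
#   x_q = quantize(x, s_in)       # overhead
#   y_q = int8_conv(x_q, w_q)     # the only part avx512_vnni accelerates
#   y   = dequantize(y_q, s_out)  # overhead
# With PicoDet's small filters the conv is cheap to begin with, so the
# two extra passes can cancel out the INT8 gain.
```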

yeliang2258 commented 2 years ago

Hi @wozna, in recent tests we found that the accuracy of this model is almost 0. Even if the speed cannot be improved, the accuracy problem still needs to be solved. The test script is here: https://github.com/PaddlePaddle/PaddleTest/tree/develop/inference/python_api_test/test_int8_model

First run:

```
sh prepare.sh
```

Then:

```
python test_ppyoloe_infer.py --model_path=models/picodet_s_416_coco_npu_quant --reader_config=configs/picodet_reader.yml --precision=int8
```

wozna commented 2 years ago

@yeliang2258 This accuracy bug is related to the new quantization method with quantize_linear and dequantize_linear, isn't it?

yeliang2258 commented 2 years ago

@wozna No, the accuracy of the quantized model in the old format is also not correct.

wozna commented 2 years ago

This PR should fix the issue: https://github.com/PaddlePaddle/Paddle/pull/46378. The problem was that even when the output was uint8, we used the int8 data type, which caused a loss of accuracy.
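
A toy numpy illustration of the kind of loss described (the scale and values are made up, not the model's): a non-negative uint8 activation spans 0..255, so storing it in an int8 buffer saturates everything above 127.

```python
import numpy as np

scale = 1.0
relu_out = np.array([10.0, 200.0, 250.0], dtype=np.float32)  # non-negative output

as_uint8 = np.clip(np.round(relu_out / scale), 0, 255).astype(np.uint8)
as_int8 = np.clip(np.round(relu_out / scale), -128, 127).astype(np.int8)
print(as_uint8)  # [ 10 200 250] -> values preserved
print(as_int8)   # [ 10 127 127] -> upper half of the range clipped
```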