PaddlePaddle / Paddle-Lite

PaddlePaddle High Performance Deep Learning Inference Engine for Mobile and Edge
https://www.paddlepaddle.org.cn/lite
Apache License 2.0

SSD detection model demo on an A308 board: is this inference speed normal? What speeds are other people getting? #9691

Closed. Genlk closed this issue 5 months ago.

Genlk commented 1 year ago

In CPU mode: 450 ms [image]

In TIM-VX mode: 1426 ms [image]
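
For reference, latency numbers like these are usually taken around predictor->Run() with a warm-up phase and an averaged loop. Below is a minimal measurement sketch using the Paddle-Lite C++ light API (MobileConfig); the model file name and the 300x300 input shape are placeholders, not necessarily what the demo uses:

```cpp
// Minimal latency-measurement sketch (not the demo's exact code).
#include <chrono>
#include <iostream>
#include "paddle_api.h"  // Paddle-Lite C++ API header

using namespace paddle::lite_api;  // NOLINT

int main() {
  MobileConfig config;
  // Placeholder model name; the .nb file is produced by the opt tool.
  config.set_model_from_file("ssd_mobilenet_v1.nb");
  auto predictor = CreatePaddlePredictor<MobileConfig>(config);

  // Fill the input with dummy data; a real demo would copy a preprocessed image.
  auto input = predictor->GetInput(0);
  input->Resize({1, 3, 300, 300});
  auto* data = input->mutable_data<float>();
  for (int i = 0; i < 1 * 3 * 300 * 300; ++i) data[i] = 0.f;

  // Warm up first so one-off setup cost is not counted in the reported latency.
  for (int i = 0; i < 5; ++i) predictor->Run();

  const int repeats = 20;
  auto start = std::chrono::steady_clock::now();
  for (int i = 0; i < repeats; ++i) predictor->Run();
  auto end = std::chrono::steady_clock::now();
  double ms = std::chrono::duration_cast<std::chrono::microseconds>(end - start)
                  .count() /
              1000.0 / repeats;
  std::cout << "average latency: " << ms << " ms" << std::endl;
  return 0;
}
```

The warm-up matters especially on the TIM-VX path, where the first run can include one-off graph building for the NPU.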

paddle-bot[bot] commented 1 year ago

Hi! We've received your issue and will arrange for a technician to answer it as soon as possible; please be patient. Please double-check that you have provided a clear problem description, reproduction code, environment & version, and error messages. You can also look for answers in the official documentation, the FAQ, and existing GitHub issues. Have a nice day!

hong19860320 commented 1 year ago

In CPU mode: 450 ms [image]

In TIM-VX mode: 1426 ms [image]

What board is the A308? And is the model FP32? Try running a fully quantized (per-layer) INT8 SSD model.
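
For reference, once a fully quantized .nb model is available, selecting the TIM-VX backend looks roughly like the sketch below. The model file name is a placeholder, and the set_nnadapter_device_names() call with the "verisilicon_timvx" device name is taken from the NNAdapter documentation as I understand it, so treat those details as assumptions rather than verified against this demo:

```cpp
// Sketch only: the model file name is a placeholder, and the NNAdapter device
// name "verisilicon_timvx" / set_nnadapter_device_names() are assumptions based
// on the Paddle-Lite NNAdapter docs.
#include <memory>
#include "paddle_api.h"

using namespace paddle::lite_api;  // NOLINT

std::shared_ptr<PaddlePredictor> CreateTimVxPredictor() {
  MobileConfig config;
  // A per-layer fully quantized INT8 model converted to .nb with the opt tool.
  config.set_model_from_file("ssd_mobilenet_v1_int8_per_layer.nb");
  // Route supported ops to the NPU through NNAdapter's TIM-VX backend;
  // anything unsupported falls back to the ARM CPU kernels.
  config.set_nnadapter_device_names({"verisilicon_timvx"});
  return CreatePaddlePredictor<MobileConfig>(config);
}
```

These NPUs primarily accelerate INT8 kernels, so an FP32 model tends to fall back to the CPU or run slowly on the TIM-VX path, which would be consistent with the numbers quoted above.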

Genlk commented 1 year ago

Hi, my mistake, it's actually a C308 board. With the 8-bit SSD model the inference time is about 12 ms, which is much faster. The earlier model must have been FP32.

One more question: can a TIM-VX program be deployed on Android on an A311D board? [https://github.com/PaddlePaddle/Paddle-Lite/issues/9741] @hong19860320