Xilinx / Vitis-AI

Vitis AI is Xilinx’s development stack for AI inference on Xilinx hardware platforms, including both edge devices and Alveo cards.
https://www.xilinx.com/ai
Apache License 2.0

Difference between compiling a deep-learning model with TVM and compiling it directly with Vitis-AI #575

Open · jiangjiajun opened 2 years ago

jiangjiajun commented 2 years ago

Hi, I'm trying to deploy a PaddlePaddle model with Vitis-AI, but I found that the ONNX format is not supported; instead, there is a document that shows how to compile a model with TVM:

https://github.com/Xilinx/Vitis-AI/blob/master/external/tvm/docs/compiling_a_model.md

Since TVM has a PaddlePaddle frontend, I think this may solve my problem, but there are a few questions I'd like to ask.

Thanks in advance!
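
For reference, here is a minimal sketch of what the TVM + Vitis-AI flow from the linked document might look like when combined with TVM's PaddlePaddle frontend. The API names follow TVM's Vitis-AI (BYOC) integration docs, but the model path, input name/shape, and DPU identifier below are placeholder assumptions, and exact option names can vary between TVM versions:

```python
# Illustrative sketch only: compile a PaddlePaddle model through TVM
# with the Vitis-AI BYOC flow. Paths, input names/shapes, and the DPU
# target are placeholders; adjust them to your model and board.
import paddle
import tvm
from tvm import relay
from tvm.relay.op.contrib.vitis_ai import partition_for_vitis_ai

# Load an exported PaddlePaddle inference model (placeholder path).
model = paddle.jit.load("path/to/inference_model")

# Import into Relay; the input name and shape are model-specific assumptions.
shape_dict = {"inputs": (1, 3, 224, 224)}
mod, params = relay.frontend.from_paddle(model, shape_dict=shape_dict)

# Partition the graph: DPU-supported operators are offloaded to Vitis-AI,
# the rest stays with TVM's host codegen. The DPU id depends on your board.
dpu_target = "DPUCZDX8G-zcu104"
mod = partition_for_vitis_ai(mod, params, dpu=dpu_target)

# Build with the Vitis-AI codegen options enabled.
build_options = {"dpu": dpu_target, "export_runtime_module": "vitis_ai.rtmod"}
with tvm.transform.PassContext(
    opt_level=3, config={"relay.ext.vitis_ai.options": build_options}
):
    lib = relay.build(mod, target="llvm", params=params)

lib.export_library("tvm_dpu_model.so")
```

Note that in this flow only the partitioned subgraph goes through the Vitis-AI quantizer/compiler; operators the DPU cannot run fall back to TVM's CPU codegen, which is the main structural difference from compiling directly with the Vitis-AI compiler.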

Ansor-ZJJ commented 5 months ago

Hello! Do you understand the difference between compiling with TVM + Vitis-AI and compiling directly with Vitis-AI? Does the performance of the final model produced by the two methods differ greatly?