Hi, I'm trying to deploy a PaddlePaddle model with Vitis-AI, but I found that the ONNX format is not supported; instead, there's a document that shows how to compile a model with TVM.
Hello! Does anyone know the difference between compiling with TVM + Vitis-AI and compiling directly with Vitis-AI? Does the performance of the final model produced by the two methods differ greatly?
https://github.com/Xilinx/Vitis-AI/blob/master/external/tvm/docs/compiling_a_model.md
Since TVM has a PaddlePaddle frontend, I think this may solve my problem, but I'd like to understand the questions above before committing to it.
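For reference, here is the flow I'm considering, as a minimal sketch based on TVM's PaddlePaddle frontend and its Vitis-AI partitioning pass. The model path, input name/shape, and DPU target string are placeholders (assumptions, not from the Vitis-AI doc), and it assumes a TVM build with the Vitis-AI codegen enabled plus paddlepaddle installed:

```python
# Sketch: PaddlePaddle -> TVM Relay -> Vitis-AI partitioning -> build.
# Input name/shape, model path, and DPU target below are assumed
# placeholders; adjust them for your model and board.
shape_dict = {"inputs": [1, 3, 224, 224]}

def compile_for_vitis_ai(model_path, dpu_target="DPUCZDX8G-zcu104"):
    # Heavy imports are kept inside the function so the sketch can be
    # read/loaded without tvm or paddle installed.
    import paddle
    import tvm
    from tvm import relay
    from tvm.relay.op.contrib.vitis_ai import partition_for_vitis_ai

    # Load a model saved with paddle.jit.save and convert it to Relay.
    model = paddle.jit.load(model_path)
    mod, params = relay.frontend.from_paddle(model, shape_dict=shape_dict)

    # Offload the DPU-supported subgraphs; unsupported ops stay on the CPU.
    mod = partition_for_vitis_ai(mod, params, dpu=dpu_target)

    # Build for the ARM host CPU on the board (assumed target triple).
    target = "llvm -mtriple=aarch64-linux-gnu"
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target, params=params)
    return lib
```

If this is roughly the intended flow, my question boils down to whether the DPU-offloaded subgraphs produced this way run as fast as a model compiled directly by the Vitis-AI compiler.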
Thanks in advance!