Description

I was trying to run the YOLOv10 model on Ryzen AI. I quantized the ONNX model with the Vitis AI ONNX Quantizer, using the configuration recommended in the Vitis AI ONNX Quantizer documentation.
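For reference, the quantization call followed roughly the shape below (a minimal sketch of the documented vai_q_onnx.quantize_static flow; the model paths, input name, and the random calibration reader are placeholders, not my actual setup, since accuracy was not a concern here):

```python
import numpy as np
import vai_q_onnx
from onnxruntime.quantization import CalibrationDataReader


class RandomCalibReader(CalibrationDataReader):
    """Placeholder calibration reader feeding random images (accuracy not a concern)."""

    def __init__(self, input_name="images", count=8):
        self.samples = iter(
            [{input_name: np.random.rand(1, 3, 640, 640).astype(np.float32)} for _ in range(count)]
        )

    def get_next(self):
        return next(self.samples, None)


# Quantize the FP32 ONNX model for the Ryzen AI DPU as described in the
# vai_q_onnx documentation; file names are placeholders.
vai_q_onnx.quantize_static(
    "yolov10s.onnx",
    "yolov10s_quantized.onnx",
    RandomCalibReader(),
    quant_format=vai_q_onnx.QuantFormat.QDQ,
    calibrate_method=vai_q_onnx.PowerOfTwoMethod.MinMSE,
    activation_type=vai_q_onnx.QuantType.QUInt8,
    weight_type=vai_q_onnx.QuantType.QInt8,
    enable_dpu=True,
    extra_options={"ActivationSymmetric": True},
)
```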
To my surprise, only about 2% of the operators ran on the DPU. Checking the vitisai_ep_report.json file, I found that the Conv operators were running on the CPU instead of the DPU.
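The model was loaded with the Vitis AI Execution Provider along these lines (a sketch; the config-file path is a placeholder for the vaip_config.json shipped with the Ryzen AI installation):

```python
import onnxruntime as ort

# Create a session with the Vitis AI Execution Provider. After compilation the
# EP writes a vitisai_ep_report.json showing how operators were partitioned
# between the CPU and the DPU.
session = ort.InferenceSession(
    "yolov10s_quantized.onnx",
    providers=["VitisAIExecutionProvider"],
    provider_options=[{"config_file": "vaip_config.json"}],
)
print(session.get_providers())
```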
Models
You can find the quantized models and other artifacts here: YOLOv10s model
Note: I only wanted to check whether the model runs on Ryzen AI and was not concerned with accuracy, so the model was not quantized with accuracy in mind.
Expected Behavior
The Conv operator should run on the DPU for better performance.
Any help or insights into resolving this issue would be greatly appreciated.
@uday610