Xilinx / Vitis-AI

Vitis AI is Xilinx’s development stack for AI inference on Xilinx hardware platforms, including both edge devices and Alveo cards.
https://www.xilinx.com/ai
Apache License 2.0

Low Inference Accuracy in DPUCADF8H #736

Open HasinduKariyawasam opened 2 years ago

HasinduKariyawasam commented 2 years ago

I tried out the MNIST classification example using PyTorch on an AWS F1 instance. The quantized model's accuracy is very high (around 98%). However, when running inference on the FPGA, the accuracy drops significantly to around 33%. Is this due to overflow? Is there a way to mitigate this problem? Or is this due to an issue with the DPU?
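For context, a minimal sketch of the post-training quantization step that produces the "quantized accuracy" figure above, using the Vitis AI PyTorch quantizer (pytorch_nndct); `model`, `calib_loader`, and `evaluate` are placeholders for the MNIST example's network, calibration DataLoader, and accuracy function, and exact arguments may differ across Vitis AI releases:

```python
# Hedged sketch of the Vitis AI PyTorch quantization flow (pytorch_nndct).
# `model`, `calib_loader`, and `evaluate` are assumed to be defined elsewhere.
import torch
from pytorch_nndct.apis import torch_quantizer

device = torch.device("cpu")
dummy_input = torch.randn(1, 1, 28, 28)  # MNIST input shape

# Calibration pass: run representative data to collect activation statistics.
quantizer = torch_quantizer("calib", model, (dummy_input,), device=device)
quant_model = quantizer.quant_model
evaluate(quant_model, calib_loader)
quantizer.export_quant_config()

# Test pass: measure quantized accuracy, then export the xmodel for vai_c.
quantizer = torch_quantizer("test", model, (dummy_input,), device=device)
quant_model = quantizer.quant_model
evaluate(quant_model, calib_loader)
quantizer.export_xmodel(deploy_check=False)
```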

HasinduKariyawasam commented 2 years ago

I managed to solve this issue. The problem was the batch size. In the example, when creating a runner at line 125 of the app_mt.py script (vart.Runner.create_runner(subgraphs[0], "run")), the message "Xmodel compiled with batchSize: 1" is printed. However, during execution, app_mt.py feeds the DPU 4 images at a time (when I checked the intermediate outputs, feeding more than 1 image at a time appears to corrupt the computations). The reason is that the value returned as the batch size at line 71 of app_mt.py is 4 (I am not sure why this happens). I edited that line to batchSize = 1; then it worked well, and the model deployed on the FPGA now also gives an accuracy of 98.91%.
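A hedged sketch of the workaround described above; the surrounding app_mt.py code is paraphrased from the Vitis AI examples, so exact line contents and variable names may differ by release:

```python
# Sketch of the relevant part of app_mt.py (Vitis AI example app).
# `subgraphs` is assumed to be the list of DPU subgraphs extracted
# from the compiled xmodel earlier in the script.
import vart

# Around line 125: create the DPU runner; on this setup it prints
# "Xmodel compiled with batchSize: 1".
dpu_runner = vart.Runner.create_runner(subgraphs[0], "run")

input_tensors = dpu_runner.get_input_tensors()
input_ndim = tuple(input_tensors[0].dims)

# Original (around line 71): batch size taken from the runner's input
# tensor shape, which reported 4 here even though the xmodel was
# compiled with batch size 1:
# batchSize = input_ndim[0]
#
# Workaround: pin the batch size to match the compiled xmodel.
batchSize = 1
```

Pinning batchSize keeps the number of images fed per DPU execution consistent with what the xmodel was compiled for, which is what restored the expected accuracy here.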

cl600class commented 2 years ago

@HasinduKariyawasam I would like to know which version of Vitis-AI you used.

HasinduKariyawasam commented 2 years ago

Hi, I am using Vitis AI 1.4 for inference. However, for compilation I use Vitis AI 2.0, because I am using PyTorch on an AWS F1 instance.