Open HasinduKariyawasam opened 2 years ago

I tried out the MNIST classification example using PyTorch on an AWS F1 instance. The quantization accuracy of the network is very high (around 98%). However, when running inference on the FPGA, the accuracy drops significantly, to around 33%. Is this due to overflow? Is there a way to mitigate this problem? Or is it due to an issue with the DPU?
I managed to solve this issue. The problem was with the batch size. When creating the runner in line 125 of the app_mt.py script, `vart.Runner.create_runner(subgraphs[0], "run")`, the message "Xmodel compiled with batchSize: 1" is printed. However, during execution the app_mt.py script feeds 4 images at a time to the DPU (when I checked the intermediate outputs, feeding more than 1 image seems to have corrupted the computations). The reason is that line 71 of app_mt.py returns 4 as the batch size (I am not sure why this happens). What I did was edit that line to `batchSize = 1`.
Then it worked well, and now the model deployed on the FPGA also gives an accuracy of 98.91%.
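In case it helps anyone else, here is a minimal sketch of where the batch size comes from in the example. The model path and variable names are placeholders, and the line references are approximate; the pattern mirrors what app_mt.py does around the lines mentioned above:

```python
import xir
import vart

# Deserialize the compiled model and pick out the DPU subgraph,
# mirroring what app_mt.py does before creating the runner.
graph = xir.Graph.deserialize("model.xmodel")  # placeholder path
subgraphs = [s for s in graph.get_root_subgraph().toposort_child_subgraph()
             if s.has_attr("device") and s.get_attr("device").upper() == "DPU"]

# Around line 125 in app_mt.py: this prints
# "Xmodel compiled with batchSize: 1".
runner = vart.Runner.create_runner(subgraphs[0], "run")

# Around line 71, the example derives the batch size from the
# runner's input tensor; in my case this reported 4 even though
# the xmodel was compiled with batchSize 1.
input_tensor = runner.get_input_tensors()[0]
reported_batch = input_tensor.dims[0]

# Pinning the batch size to match the compiled xmodel fixed the
# corrupted outputs:
batchSize = 1
```

A more robust fix might be to compare the reported batch size against the one the xmodel was compiled with and fail loudly on a mismatch, but hardcoding 1 was enough here.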
@HasinduKariyawasam I would like to know which version of Vitis-AI you are using.
Hi, I am using Vitis AI 1.4 for inference. However, for compilation I use Vitis AI 2.0, because I am using PyTorch on an AWS F1 instance.