Xilinx / Vitis-AI

Vitis AI is Xilinx’s development stack for AI inference on Xilinx hardware platforms, including both edge devices and Alveo cards.
https://www.xilinx.com/ai
Apache License 2.0

Quantized and compiled ofa_yolo does not output equal predictions #1179

Closed dextroza closed 1 year ago

dextroza commented 1 year ago

Hi, we are experimenting with the ofa_yolo model provided by Xilinx: pt_OFA-yolo_coco_640_640_0.5_24.62G_2.5

We want to reproduce the same predictions on both the host PC and the ZCU104. For one image, we obtained predictions from the quantized model (on the host) and from the compiled model (on the board), but they are not equal. We verified that the preprocessing is identical. We also checked the postprocessing code, but we are not sure it matches on both sides.
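To narrow down whether the mismatch comes from the network itself or from postprocessing, one approach is to dump the raw output tensors on each platform (e.g. with `np.save`) before any decoding/NMS, and compare them element-wise. The helper below is a hypothetical sketch of such a comparison; the array names and tolerance are illustrative, not part of the Vitis AI API:

```python
import numpy as np

def compare_raw_outputs(host_out, board_out, atol=1e-3):
    """Compare raw network outputs saved on the host (quantized model)
    and on the board (compiled model), before any postprocessing.

    Returns (match, message). `atol` is an illustrative tolerance;
    DPU fixed-point arithmetic can cause small deviations.
    """
    host = np.asarray(host_out, dtype=np.float32)
    board = np.asarray(board_out, dtype=np.float32)
    if host.shape != board.shape:
        return False, f"shape mismatch: {host.shape} vs {board.shape}"
    diff = np.abs(host - board)
    return bool(np.all(diff <= atol)), f"max abs diff: {diff.max():.6f}"

# Synthetic tensors standing in for the saved outputs:
a = np.zeros((1, 85, 80, 80), dtype=np.float32)
b = a + 5e-4  # small deviation, within tolerance
print(compare_raw_outputs(a, b))
```

If the raw tensors already differ here, the postprocessing code is not the culprit and the deviation comes from the compiled model itself.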

Postprocessing code for hw: ofa_yolo_postprocess

Environment: Vitis AI docker 2.5 1xDPU B4096, petalinux 22.1, ZCU104

Could you help us with that, please? Should models from the model zoo produce identical predictions for the quantized model on the PC and the compiled model on the embedded device? Do you have any PyTorch YOLO examples where the Python and C++ postprocessing produce the same results?

Regards dextroza

lishixlnx commented 1 year ago
  1. Do you have the same postprocessing logic on both the PC side and the ZCU104 side? You only provided one URL, for the ZCU104 postprocessing, so I don't know your code on the PC side.

  2. A quantized model and a compiled model normally produce similar, but not exactly identical, prediction results.

  3. The xnnpp library is only available in C++.
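Given point 2 above, comparing final detections for exact equality is too strict; a tolerance-based comparison (boxes matched by IoU, scores matched within a margin) is a more realistic acceptance check. The sketch below is a hypothetical helper, not part of Vitis AI; detection tuples and thresholds are assumed for illustration:

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def detections_match(host_dets, board_dets, iou_thr=0.9, score_tol=0.05):
    """Greedy matching of detection lists.

    Each detection is assumed to be (class_id, score, (x1, y1, x2, y2)).
    Two lists "match" when every host detection has a board detection of
    the same class, with IoU >= iou_thr and score within score_tol.
    """
    if len(host_dets) != len(board_dets):
        return False
    used = set()
    for cls, score, box in host_dets:
        hit = None
        for j, (cls_b, score_b, box_b) in enumerate(board_dets):
            if j in used or cls != cls_b:
                continue
            if iou(box, box_b) >= iou_thr and abs(score - score_b) <= score_tol:
                hit = j
                break
        if hit is None:
            return False
        used.add(hit)
    return True
```

With this kind of check, small fixed-point deviations on the DPU do not count as failures, while genuinely different detections (missing boxes, class flips) still do.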

dextroza commented 1 year ago

Hi, we already solved this issue a long time ago, but thanks! I will close it.