Xilinx / Vitis-AI

Vitis AI is Xilinx’s development stack for AI inference on Xilinx hardware platforms, including both edge devices and Alveo cards.
https://www.xilinx.com/ai
Apache License 2.0

elew0's quantization info's error #1093

Closed · damor-rbz closed this 1 year ago

damor-rbz commented 1 year ago

Hello,

We have a yolov3-tiny model that can run on the DPU. Quantization succeeds, but compilation fails with the following error:

[UNILOG][FATAL][XCOM_UNSUPPORT_QUANTIZATION][The fix info is error or unsupported.] elew0's quantization info's variation is too large.

We are using Vitis-AI docker version 2.0, and we can reproduce the error with version 2.5.

We tried quantization with random data and with real images; in both cases we got the same result. If we use an all-zeros input vector, the model compiles.

With post-training quantization we can get a model that compiles, but only if we first run some forward passes with a dataset of near-zero values and then do the training with real data. This was tested on version 2.0.
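For context, the DPU uses 8-bit fixed-point quantization with a power-of-2 scale (a "fix position"), and the compiler requires the fix info of the tensors feeding an element-wise add (`elew0` here) to be close. The sketch below is plain Python, not the Vitis AI quantizer, and the exact fix-position formula is an assumption for illustration only; it shows how two add inputs with very different dynamic ranges end up with fix positions far apart, which plausibly matches the "variation is too large" check:

```python
import math

def fix_pos(max_abs, bit_width=8):
    # Illustrative fraction-length choice (NOT the exact Vitis AI formula):
    # pick the power-of-2 fix position fp so that round(x * 2**fp) fits
    # in a signed bit_width integer for |x| <= max_abs.
    if max_abs == 0:
        return bit_width - 1  # degenerate all-zero tensor
    return (bit_width - 1) - math.ceil(math.log2(max_abs))

# Hypothetical activation ranges for the two tensors feeding one add:
fp_a = fix_pos(1.5)    # small dynamic range
fp_b = fix_pos(400.0)  # much larger dynamic range
print(fp_a, fp_b, abs(fp_a - fp_b))  # large gap between fix positions
```

This also suggests why an all-zeros input (or near-zero warm-up passes) lets compilation succeed: the calibrated ranges, and therefore the fix positions, of the two add inputs collapse toward the same value.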

Thanks,

ChaoLi-AMD commented 1 year ago

> With post-training quantization we can get a model that compiles, but only if we first run some forward passes with a dataset of near-zero values and then do the training with real data. This was tested on version 2.0.

The description here is not very clear, could you please try it on Vitis-AI docker version 3.0 to see if the problem still exists?

ChaoLi-AMD commented 1 year ago

This issue has been inactive for 4 months, so we will close it for now. If you have any further concerns or need assistance, please feel free to reopen it at any time.