Xilinx / Vitis-AI

Vitis AI is Xilinx’s development stack for AI inference on Xilinx hardware platforms, including both edge devices and Alveo cards.
https://www.xilinx.com/ai
Apache License 2.0

[UNILOG][FATAL][XCOM_ACGEN_ERROR][instuction generating fail, please contact us.] #740

Closed · mlopezpalma closed this issue 2 years ago

mlopezpalma commented 2 years ago

I am new to Vitis-AI. I am using `./docker_run.sh xilinx/vitis-ai-gpu:latest` for the tutorial example 08-tf2_flow.

The training step works fine, and so does quantization.

But the compilation step, `source compile.sh zcu102`, fails. The compile function in compile.sh is:

```bash
compile() {
  vai_c_tensorflow2 \
    --model      build/quant_model/q_model.h5 \
    --arch       $ARCH \
    --output_dir build/compiled_$TARGET \
    --net_name   customcnn
}
```

It gives the following error:

```
(vitis-ai-tensorflow2) Vitis-AI /workspace > source compile.sh zcu102

COMPILING MODEL FOR ZCU102..

[INFO] Namespace(batchsize=1, inputs_shape=None, layout='NHWC', model_files=['build/quant_model/q_model.h5'], model_type='tensorflow2', named_inputs_shape=None, out_filename='/tmp/customcnn_org.xmodel', proto=None)
[INFO] tensorflow2 model: /workspace/build/quant_model/q_model.h5
[INFO] keras version: 2.6.0
[INFO] Tensorflow Keras model type: functional
[INFO] parse raw model     :100%|██████████| 71/71 [00:00<00:00, 21341.23it/s]
[INFO] infer shape (NHWC)  :100%|██████████| 108/108 [00:00<00:00, 5136.93it/s]
[INFO] perform level-0 opt :100%|██████████| 2/2 [00:00<00:00, 137.53it/s]
[INFO] perform level-1 opt :100%|██████████| 2/2 [00:00<00:00, 542.36it/s]
[INFO] generate xmodel     :100%|██████████| 108/108 [00:00<00:00, 4238.97it/s]
[INFO] dump xmodel: /tmp/customcnn_org.xmodel
[UNILOG][INFO] Compile mode: dpu
[UNILOG][INFO] Debug mode: function
[UNILOG][INFO] Target architecture: DPUCZDX8G_ISA0_B4096_MAX_BG2
[UNILOG][INFO] Graph name: model, with op num: 232
[UNILOG][INFO] Begin to compile...
[UNILOG][FATAL][XCOM_ACGEN_ERROR][instuction generating fail, please contact us.]
Check failure stack trace:
Aborted (core dumped)
```
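For context, the `zcu102` argument passed to compile.sh above is resolved to an arch.json before `compile()` is called. The sketch below shows the assumed target-selection logic; the arch.json path is the usual location inside the Vitis AI docker image and should be verified in your container, not taken from the tutorial verbatim.

```bash
# Minimal sketch of the assumed target-selection logic in compile.sh
# (arch.json path is the usual location inside the Vitis AI docker image;
#  verify with: ls /opt/vitis_ai/compiler/arch/DPUCZDX8G/ZCU102/).
TARGET=$1
if [ "$TARGET" = "zcu102" ]; then
    ARCH=/opt/vitis_ai/compiler/arch/DPUCZDX8G/ZCU102/arch.json
    echo "COMPILING MODEL FOR ZCU102.."
else
    echo "Unsupported target: $TARGET"
    return 1   # the script is sourced, so return rather than exit
fi
```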


A similar problem happens if I use the CPU docker instead of the GPU one. I also tested with `./docker_run.sh xilinx/vitis-ai:1.3.411` (following some other posts), but then other errors appear.

I would appreciate any help with this issue.

Thanks.

mlopezpalma commented 2 years ago

An additional observation: if the model is only briefly trained (2 epochs, accuracy 0.5), the error does not appear; but if the model is trained extensively (250 epochs, accuracy 0.95), the error does appear.

wanghong4compiler commented 2 years ago

Hi @mlopezpalma, would you mind upgrading the toolchain to VAI 2.0? I tested this model on VAI 2.0 and found it compiled successfully.
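For anyone following along, switching to the VAI 2.0 toolchain is essentially a matter of checking out the 2.0 release of the repo and pulling the matching docker image. The branch/tag name and image tag below are assumptions; check the Vitis-AI releases page and Docker Hub for the exact identifiers.

```bash
# Rough sketch of moving to the VAI 2.0 toolchain
# (tag "v2.0" and the image tag are assumptions -- confirm against
#  the Vitis-AI releases page and Docker Hub before use).
git clone -b v2.0 https://github.com/Xilinx/Vitis-AI.git
cd Vitis-AI
docker pull xilinx/vitis-ai-cpu:latest    # or a pinned 2.0.x tag
./docker_run.sh xilinx/vitis-ai-cpu:latest
# then re-run quantization and `source compile.sh zcu102` inside the new container
```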

AfifaIshtiaq commented 2 years ago

> Hi @mlopezpalma, would you mind upgrading the toolchain to VAI 2.0? I tested this model on VAI 2.0 and found it compiled successfully.

Hi, I have the same issue and I'm using VAI 2.0. Here is the quantized model: deep_rx_dilation_rate_fully_trained.zip

wanghong4compiler commented 2 years ago

Hi @AfifaIshtiaq, is this the same as issue #854?

AfifaIshtiaq commented 2 years ago

> Hi @AfifaIshtiaq, is this the same as issue #854?

Yes

qianglin-xlnx commented 2 years ago

Same issue as #854.