Hi, @metanav
'[relu6_opos >= 4]' is a constraint of the DPU IP in Vitis-AI 1.2. Make sure you use real images for quantization and that the resulting accuracy is acceptable; the quantization position value of ReLU6 should then meet the DPU IP's requirement.
You could also try the Vitis-AI 1.3 docker with the new XIR-based compilation flow, which I think has no such constraint at all.
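In case it helps, here is a minimal sketch of post-training quantization with the Vitis-AI TF2 quantizer using real calibration images. The paths, image size, and preprocessing below are placeholders, not taken from this thread; adapt them to your own training pipeline.

# Minimal sketch: post-training quantization with the Vitis-AI TF2 quantizer.
# File names, image size, and preprocessing are illustrative only.
import tensorflow as tf
from tensorflow_model_optimization.quantization.keras import vitis_quantize

float_model = tf.keras.models.load_model("float_model.h5")

# Build a calibration dataset from real training/validation images so the
# computed quantization positions reflect the true activation ranges.
calib_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "calib_images/", labels=None, image_size=(224, 224), batch_size=32)
calib_ds = calib_ds.map(lambda x: x / 255.0)  # same preprocessing as training

quantizer = vitis_quantize.VitisQuantizer(float_model)
quantized_model = quantizer.quantize_model(calib_dataset=calib_ds)
quantized_model.save("quantized_model.h5")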
Thanks very much.
@Mookel Thanks for your reply. I am using the same training images for calibration. If I use Vitis-AI 1.3 docker, would it compile for Ultra96V2 target? I had an issue with Vitis-AI 1.3 docker previously: https://github.com/Xilinx/Vitis-AI/issues/233
Hi, @qianglin-xlnx Can Vitis-AI 1.3 support Ultra96V2?
Yes.
Then what is the target name? I am getting the error below:
[INFO] Namespace(inputs_shape=None, layout='NHWC', model_files=['quantized_model.h5'], model_type='tensorflow2', out_filename='output/test_model_org.xmodel', proto=None)
[INFO] tensorflow2 model: ../notebooks/experiments/069/quantized_model.h5
/opt/vitis_ai/conda/envs/vitis-ai-tensorflow2/lib/python3.7/site-packages/xnnc/translator/tensorflow_translator.py:1752: H5pyDeprecationWarning: dataset.value has been deprecated. Use dataset[()] instead.
value = param.get(group).get(ds).value
[INFO] parse raw model :100%|██████████████████████████████████████████████████| 135/135 [00:00<00:00, 24550.43it/s]
[INFO] infer shape (NHWC) :100%|██████████████████████████████████████████████████| 222/222 [00:00<00:00, 12077.77it/s]
[OPT] No optimization method available for xir-level optimization.
[INFO] generate xmodel :100%|██████████████████████████████████████████████████| 222/222 [00:00<00:00, 4300.90it/s]
[INFO] generate xmodel: /workspace/ouput/test_model_org.xmodel
[UNILOG][INFO] The compiler log will be dumped at "/tmp/vitis-ai-user/log/xcompiler-20201222-070210-21349"
[UNILOG][INFO] Target architecture: DPUCZDX8G
[UNILOG][FATAL][TARGET_FACTORY_UNREGISTERED_TARGET][Unregistered target!] Cannot find target with name DPUCZDX8G, valid names are: {DPUCAHX8H_ISA2=>0x20200000000002a,DPUCAHX8H_ISA2_ELP2=>0x20200000000002e,DPUCAHX8L_ISA0=>0x30000000000001d,DPUCVDX8G_ISA0_B16384C64B1=>0x600000076080812,DPUCVDX8G_ISA0_B8192C32B1=>0x600000076080811,DPUCVDX8G_ISA0_B8192C32B1_ELP4=>0x600000076040411,DPUCVDX8G_ISA0_B8192C32B3=>0x600000076080831,DPUCVDX8G_ISA0_B8192C32B3_DW=>0x6000000f6088831,DPUCVDX8G_ISA0_B8192C32B3_I4W8B2=>0x600000276080831,DPUCVDX8G_ISA0_B8192C32B3_I8W4B2=>0x600000376080831,DPUCVDX8G_ISA0_B8192C32B3_I8W8B2=>0x600000176080831,DPUCVDX8H_ISA0=>0x5000000000007ee,DPUCZDI4G_ISA0_B4096_DEMO_SSD=>0x400002003220206,DPUCZDI4G_ISA0_B8192D8_DEMO_SSD=>0x400002003220207,DPUCZDX8G_ISA0_B1024_MAX=>0x1000020f7014402,DPUCZDX8G_ISA0_B1024_MIN=>0x100002022010102,DPUCZDX8G_ISA0_B1152_MAX=>0x1000020f7012203,DPUCZDX8G_ISA0_B1152_MIN=>0x100002022010103,DPUCZDX8G_ISA0_B1600_MAX=>0x1000020f7014404,DPUCZDX8G_ISA0_B1600_MIN=>0x100002022010104,DPUCZDX8G_ISA0_B2304_MAX=>0x1000020f7014405,DPUCZDX8G_ISA0_B2304_MAX_BG2=>0x1000020f6014405,DPUCZDX8G_ISA0_B2304_MIN=>0x100002022010105,DPUCZDX8G_ISA0_B3136_MAX=>0x1000020f7014406,DPUCZDX8G_ISA0_B3136_MAX_BG2=>0x1000020f6014406,DPUCZDX8G_ISA0_B3136_MIN=>0x100002022010106,DPUCZDX8G_ISA0_B4096_MAX=>0x1000020f7014407,DPUCZDX8G_ISA0_B4096_MAX_BG2=>0x1000020f6014407,DPUCZDX8G_ISA0_B4096_MAX_EM=>0x1000030f7014407,DPUCZDX8G_ISA0_B4096_MIN=>0x100002022010107,DPUCZDX8G_ISA0_B512_MAX=>0x1000020f7012200,DPUCZDX8G_ISA0_B512_MIN=>0x100002022010100,DPUCZDX8G_ISA0_B800_MAX=>0x1000020f7012201,DPUCZDX8G_ISA0_B800_MIN=>0x100002022010101}
*** Check failure stack trace: ***
This program has crashed!
Aborted (core dumped)
@metanav It depends on the DPU you integrated on Ultra96V2. For Ultra96V2, you may integrate a B1152F, B1600F or B2304F DPU, so the target name would be one of the corresponding DPUCZDX8G_ISA0_B1152_*, DPUCZDX8G_ISA0_B1600_* or DPUCZDX8G_ISA0_B2304_* entries from the valid-name list in the log above.
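As an illustration only (not verified on your design), compiling for a B2304 DPU with the XIR flow could look like the sketch below; the fingerprint is copied from the DPUCZDX8G_ISA0_B2304_MAX entry in the log above and must match the DPU actually integrated in your Ultra96V2 hardware, and the model/output names are placeholders.

# arch.json describing the DPU target; 0x1000020f7014405 is the fingerprint
# listed for DPUCZDX8G_ISA0_B2304_MAX in the compiler log above.
echo '{"fingerprint":"0x1000020f7014405"}' > ./arch.json

# Compile the quantized TF2 model with the XIR-based compiler (Vitis-AI 1.3).
vai_c_tensorflow2 \
    --model      quantized_model.h5 \
    --arch       ./arch.json \
    --output_dir ./compiled \
    --net_name   test_model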
Hi @metanav Since we haven't received your reply for a long time, we assume you have solved this issue, so I'm going to close it. If you still have any questions, please feel free to reopen it. Thank you very much.
I am getting an error when compiling a quantized tf.keras model:
[VAI_C][Fatal] Check failed for condition [relu6_opos >= 4] in [/home/xbuild/conda-bld/dnnc_1592904456005/work/dnnc_impl/codegen/dpu_operator.cc:117] :output quantization postion value of ReLU6 must be larger than 4 after quantization, but current operator [efficientnet_lite0_model_blocks_11_Relu6_0_Relu6] has quantization postion value [3].
I am using the Vitis-AI 1.2 docker. There were no errors or warnings while quantizing.