Hi @zackdilan, the Vitis AI 1.3 user guide includes some basic information on how to do PTQ (quantize calibration) and QAT (quantize finetuning) with the tf2 quantizer. 1 & 2. We support both PTQ and QAT, and in step 2 you are doing PTQ.
from tensorflow import keras
with keras.utils.CustomObjectScope({'GroupNormalization': GroupNormalization}):
    quantized_model = quantizer.quantize_model(calib_dataset=eval_dataset)
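For context, here is a minimal sketch of where the `quantizer` object above typically comes from, assuming the vai_q_tensorflow2 PTQ flow described in the Vitis AI 1.3 user guide. The file name `float_model.h5`, the dataset `eval_dataset`, and the tensorflow_addons origin of `GroupNormalization` are all placeholders/assumptions, not details from this thread:

```python
from tensorflow import keras
from tensorflow_model_optimization.quantization.keras import vitis_quantize
# Assumption: the GroupNormalization layer comes from tensorflow_addons;
# substitute your own implementation if it is a hand-written custom layer.
from tensorflow_addons.layers import GroupNormalization

# Load the trained float model; GroupNormalization must be registered as a
# custom object so Keras can deserialize it.
with keras.utils.CustomObjectScope({'GroupNormalization': GroupNormalization}):
    float_model = keras.models.load_model('float_model.h5')

# Post-training quantization (quantize calibration): the calibration dataset
# is only used to estimate activation ranges, so no labels are needed.
quantizer = vitis_quantize.VitisQuantizer(float_model)
quantized_model = quantizer.quantize_model(calib_dataset=eval_dataset)
quantized_model.save('quantized_model.h5')
```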
In Vitis AI 1.3, quantization support for custom layers is experimental. You can do it by registering the custom layer in the quantize strategy file (https://github.com/Xilinx/Vitis-AI/blob/master/tools/Vitis-AI-Quantizer/vai_q_tensorflow2.x/tensorflow_model_optimization/python/core/quantization/keras/vitis/vitis_8bit_default_quantize_strategy.json). Please note that this is only for quantization experiments; the compiler cannot handle custom layers yet.
We are improving the interface and ease of use of the custom-layer APIs and will provide examples and documentation for them; they are on the schedule for upcoming releases.
Hi @sheng-xiao, thanks a lot for your detailed explanation.
From what I have gathered:
3. Finally, when I try to compile the model: Expected: layers in the support list are mapped to the DPU and the other layers are mapped to the CPU. What I got:
Hi @sheng-xiao, sorry for bothering you again. I would like to have a clear understanding of the unsupported operators.
The Vitis AI compiler assigns an operation to the CPU only if the operation's configuration exceeds the DPU's limitations. For example, the DPU supports the CONV2D operation, and its kernel-dimension limitation is w, h: [1, 16].
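To make that rule concrete, here is a small, hypothetical helper (not part of any Vitis AI API) that applies the quoted w, h: [1, 16] kernel-size limit; the real compiler checks many more parameters (strides, dilation, channel counts, etc.):

```python
def conv2d_kernel_fits_dpu(kernel_h, kernel_w, limit=(1, 16)):
    # True if both kernel dimensions fall within the quoted DPU limit.
    lo, hi = limit
    return lo <= kernel_h <= hi and lo <= kernel_w <= hi

print(conv2d_kernel_fits_dpu(3, 3))    # True  -> eligible for the DPU
print(conv2d_kernel_fits_dpu(17, 17))  # False -> exceeds the quoted limit
```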
So if I have a conv2d layer with a 17×17 kernel, it will be assigned to the CPU, right?
And if I use tanh as the activation of a conv2d layer, will it be assigned to the CPU automatically?
I did a small proof check for the tanh activation, but I have realized:
And I think the Vitis AI compiler (for now) cannot automatically assign an operation to the CPU, for example the Tanh operator (which is not DPU-supported). You can find more details in the compile log.
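For reference, a minimal sketch of the kind of probe model discussed here (a toy Keras model, not the actual network from this thread); whether the tanh really stays on the CPU has to be verified in the compile log:

```python
from tensorflow import keras

# Toy model for probing operator assignment: Conv2D is in the DPU support
# list, while a standalone tanh activation is not, per the discussion above.
inputs = keras.Input(shape=(32, 32, 3))
x = keras.layers.Conv2D(8, 3, padding='same')(inputs)
x = keras.layers.Activation('tanh')(x)
model = keras.Model(inputs, x)
model.summary()
```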
The GroupNormalization op is not supported. This issue has existed for a long time, so I will close it; if you still have a problem, you can reopen it. Thank you @zackdilan
System information
TensorFlow version (you are using): TF 2.3.0
Are you willing to contribute it (Yes/No): Yes
Motivation
Describe the feature: Group Normalization divides the channels into groups and computes within each group the mean and variance for normalization.
Describe how the feature helps achieve the use case: Empirically, its accuracy is more stable than batch norm over a wide range of small batch sizes, when the learning rate is adjusted linearly with the batch size.
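For reference, a minimal NumPy sketch of the computation described above (normalization only; the full layer also applies a learned per-channel scale and offset):

```python
import numpy as np

def group_norm(x, groups, eps=1e-5):
    # x: (N, H, W, C) feature map. Split C into `groups` groups and
    # normalize each group with its own mean and variance.
    n, h, w, c = x.shape
    g = x.reshape(n, h, w, groups, c // groups)
    mean = g.mean(axis=(1, 2, 4), keepdims=True)
    var = g.var(axis=(1, 2, 4), keepdims=True)
    g = (g - mean) / np.sqrt(var + eps)
    return g.reshape(n, h, w, c)
```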
Describe how existing APIs don't satisfy your use case: I am attaching the procedure, the TF model, and error screenshots:
Misc.
For people who want to contact me for collaboration: robin.chacko@plc2.de