I have been trying to quantize a channel-pruned EfficientNetV2 using the Vitis AI PyTorch quantizer, but I have been facing these issues:
[VAIQ_NOTE]: =>Successfully convert 'EfficientNet' to xmodel.(quantize_result/EfficientNet_int.xmodel)
/opt/vitis_ai/conda/envs/vitis-ai-wego-torch/lib/python3.7/site-packages/pytorch_nndct/nn/modules/prim_ops.py:116: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if not (list(self.node.out_tensors[0].shape[1:]) == list(input.size())[1:]):
/opt/vitis_ai/conda/envs/vitis-ai-wego-torch/lib/python3.7/site-packages/pytorch_nndct/quantization/torchquantizer.py:53: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if inf.sum() > 0 or nan.sum() > 0:
/opt/vitis_ai/conda/envs/vitis-ai-wego-torch/lib/python3.7/site-packages/pytorch_nndct/nn/modules/fix_ops.py:68: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
tensor.storage().size() != tensor.numel()):
/opt/vitis_ai/conda/envs/vitis-ai-wego-torch/lib/python3.7/site-packages/pytorch_nndct/nn/modules/adaptive_avg_pool.py:42: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
input_size = [int(dim) for dim in input.shape[2:]]
The model does get quantized, and the quantized ONNX version works fine, but when we compile and deploy the xmodel to the FPGA, we do not get accurate outputs: the model classifies all inputs to the same label.
As for the last TracerWarning, changing from AdaptiveAvgPool2d to AvgPool2d with a fixed kernel size (if that is acceptable for your use case) makes the warning go away.
GitHub repo with the quantization code: https://github.com/amitpant7/Quantizing-Efficientnetv2-using-Vitis-AI-Pytorch/tree/compiled/efficientnet