W rknn-toolkit version: 1.7.5
D Using CPPUTILS: True
I Generate input meta ...
D import clients finished
I Load net...
I Load data...
I Load input meta
I Load quantization tensor table
I Start quantization...
D import clients finished
D iterations: 1, batch_size: 100
I Quantization start...
D Optimizing network with qnt_insert_converter_layer
E Catch exception when direct build RKNN model!
E Traceback (most recent call last):
E File "rknn/base/RKNNlib/app/medusa/quantization.py", line 51, in rknn.base.RKNNlib.app.medusa.quantization.Quantization._run_quantization
E File "rknn/base/RKNNlib/app/medusa/quantization.py", line 92, in rknn.base.RKNNlib.app.medusa.quantization.Quantization._quantize_net
E File "rknn/base/RKNNlib/app/medusa/quantization.py", line 130, in rknn.base.RKNNlib.app.medusa.quantization.Quantization._generate_hybrid_table
E File "rknn/base/RKNNlib/optimize/optimizer.py", line 437, in rknn.base.RKNNlib.optimize.optimizer.Optimizer.apply
E File "rknn/base/RKNNlib/optimize/rules/quantize/hybrid_insert_converter_layer.py", line 223, in rknn.base.RKNNlib.optimize.rules.quantize.hybrid_insert_converter_layer.HybridInsertConverterLayer.apply
E AttributeError: 'NoneType' object has no attribute 'is_op'
E Please feedback the detailed log file <log_feedback_to_the_rknn_toolkit_dev_team.log> to the RKNN Toolkit development team.
E You can also check github issues: https://github.com/rockchip-linux/rknn-toolkit/issues
I converted the LightTrack backbone from ONNX to an RKNN model, and quantization accuracy analysis runs fine. Now I want to do hybrid quantization: step 1 completes without any problem, but step 2 fails with the error above and I cannot resolve it myself. The ONNX model is attached: lighttrack_backbone.zip
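For reference, this is the hybrid-quantization flow being followed, a minimal sketch based on the rknn-toolkit 1.7.x Python API. The file paths, mean/std values, and dataset.txt contents are placeholders for this particular setup, not part of the toolkit:

```python
from rknn.api import RKNN

# Minimal hybrid-quantization sketch for rknn-toolkit 1.7.x.
# Paths, mean/std values, and dataset.txt are placeholders.
rknn = RKNN(verbose=True)

# Preprocessing config (values here are illustrative assumptions).
rknn.config(mean_values=[[0, 0, 0]],
            std_values=[[255, 255, 255]],
            reorder_channel='0 1 2')

ret = rknn.load_onnx(model='./lighttrack_backbone.onnx')
assert ret == 0, 'load_onnx failed'

# Step 1: generates lighttrack_backbone.{json,data,quantization.cfg}.
# This step completes without error.
ret = rknn.hybrid_quantization_step1(dataset='./dataset.txt')
assert ret == 0, 'hybrid_quantization_step1 failed'

# (The generated .quantization.cfg can be edited here to move
# selected layers between quantized and float precision.)

# Step 2: builds the RKNN model from the step-1 artifacts.
# This is where the AttributeError in the log above is raised.
ret = rknn.hybrid_quantization_step2(
    model_input='./lighttrack_backbone.json',
    data_input='./lighttrack_backbone.data',
    model_quantization_cfg='./lighttrack_backbone.quantization.cfg',
    dataset='./dataset.txt')
assert ret == 0, 'hybrid_quantization_step2 failed'

rknn.export_rknn('./lighttrack_backbone.rknn')
rknn.release()
```

This sketch cannot be executed without the vendor rknn-toolkit wheel installed, so it is provided only to show at which call the traceback occurs.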