I have a quantized *.jit model (the quantization was performed in torch), but rknn-toolkit cannot load it :(
rknn-toolkit_v1.7.3 + torch_v1.9.0
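For context, the torch-side quantization was done with eager-mode post-training static quantization, which leaves `quantized::batch_norm` nodes in the graph when BatchNorm is not fused into the preceding Conv. A minimal sketch (the real feature extractor is of course larger; `TinyNet` and the shapes here are only illustrative):

```python
import torch
import torch.nn as nn

# Hypothetical tiny model standing in for the real feature extractor.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 8, 3)
        self.bn = nn.BatchNorm2d(8)   # left un-fused -> quantized::batch_norm in the graph
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.bn(self.conv(self.quant(x))))

model = TinyNet().eval()
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
prepared = torch.quantization.prepare(model)
prepared(torch.randn(1, 3, 32, 32))          # calibration pass
quantized = torch.quantization.convert(prepared)

# Export to TorchScript the same way the failing *.jit file was produced.
example = torch.randn(1, 3, 32, 32)
scripted = torch.jit.trace(quantized, example)
print(tuple(scripted(example).shape))        # (1, 8, 30, 30)
```

Fusing Conv+BN with `torch.quantization.fuse_modules` before `prepare` would avoid the standalone quantized batch-norm op entirely, but the model above reproduces the exported graph as-is.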
...
rknn.config(quantize_input_node=True,
            mean_values=mean,
            std_values=std,
            quantized_dtype='dynamic_fixed_point-i8',
            target_platform='rv1126',
            batch_size=100)
print('--> Loading model')
ret = rknn.load_pytorch(model=jit_model_file, input_size_list=input_size)
if ret != 0:
    print('Load Pytorch JIT model failed!')
    exit(ret)
...
console output:
I Start importing pytorch...
/home/sasha/.face_chain_cache/model_files/feature_extractor/masked/at_0015.jit ********************
D import clients finished
W Pt model version is 1.6(same as you can check through <netron>), but the installed pytorch is 1.9.0+cu102. This may cause the model to fail to load.
E Catch exception when loading pytorch model: /home/sasha/.face_chain_cache/model_files/feature_extractor/masked/at_0015.jit!
E Traceback (most recent call last):
E File "rknn/api/rknn_base.py", line 399, in rknn.api.rknn_base.RKNNBase.load_pytorch
E File "rknn/base/RKNNlib/RK_nn.py", line 161, in rknn.base.RKNNlib.RK_nn.RKnn.load_pytorch
E File "rknn/base/RKNNlib/app/importer/import_pytorch.py", line 129, in rknn.base.RKNNlib.app.importer.import_pytorch.ImportPytorch.run
E File "rknn/base/RKNNlib/converter/convert_pytorch_new.py", line 5120, in rknn.base.RKNNlib.converter.convert_pytorch_new.convert_pytorch.load
E File "rknn/base/RKNNlib/converter/convert_pytorch_new.py", line 4902, in rknn.base.RKNNlib.converter.convert_pytorch_new.PyTorchOpConverter.report_missing_conversion
E NotImplementedError: The following operators are not implemented: ['quantized::batch_norm']
E Please feedback the detailed log file <conversion.log> to the RKNN Toolkit development team.
E You can also check github issues: https://github.com/rockchip-linux/rknn-toolkit/issues
Load Pytorch JIT model failed!
So I see that the problem is the unimplemented operator ['quantized::batch_norm']. But what is the workaround?
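One workaround I am considering (a sketch, not yet verified on rv1126): export the *float* TorchScript model from torch, i.e. skip `torch.quantization.convert`, and let rknn-toolkit perform the quantization itself during `build`, so no `quantized::*` operators ever appear in the loaded graph. `mean`, `std` and `input_size` are as in my snippet above; the file names and `dataset.txt` calibration list are placeholders:

```python
from rknn.api import RKNN

rknn = RKNN()
rknn.config(quantize_input_node=True,
            mean_values=mean,
            std_values=std,
            quantized_dtype='dynamic_fixed_point-i8',
            target_platform='rv1126',
            batch_size=100)

# Load the FLOAT TorchScript model (no torch quantization applied).
ret = rknn.load_pytorch(model=float_jit_model_file, input_size_list=input_size)
assert ret == 0, 'load_pytorch failed'

# Quantization happens here, driven by a calibration image list.
ret = rknn.build(do_quantization=True, dataset='./dataset.txt')
assert ret == 0, 'build failed'

rknn.export_rknn('./model.rknn')
```

Is this the intended flow, or is there a way to keep the torch-side quantization?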