xuexingyu24 / License_Plate_Detection_Pytorch

A two-stage, lightweight, high-performance license plate recognition pipeline based on MTCNN and LPRNet

I have successfully converted MTCNN and STN to ONNX, but I have dimension problems when I convert LPRNet to ONNX #29

Open JF-Lee opened 4 years ago

JF-Lee commented 4 years ago

graph(%input.1 : Float(1, 3, 24, 94), %backbone.0.weight : Float(64, 3, 3, 3), %backbone.0.bias : Float(64), %backbone.1.weight : Float(64), %backbone.1.bias : Float(64), %backbone.1.running_mean : Float(64), %backbone.1.running_var : Float(64), %backbone.4.block.0.weight : Float(32, 64, 1, 1), %backbone.4.block.0.bias : Float(32), %backbone.4.block.2.weight : Float(32, 32, 3, 1), %backbone.4.block.2.bias : Float(32), %backbone.4.block.4.weight : Float(32, 32, 1, 3), %backbone.4.block.4.bias : Float(32), %backbone.4.block.6.weight : Float(128, 32, 1, 1), %backbone.4.block.6.bias : Float(128), %backbone.5.weight : Float(128), %backbone.5.bias : Float(128), %backbone.5.running_mean : Float(128), %backbone.5.running_var : Float(128), %backbone.8.block.0.weight : Float(64, 64, 1, 1), %backbone.8.block.0.bias : Float(64), %backbone.8.block.2.weight : Float(64, 64, 3, 1), %backbone.8.block.2.bias : Float(64), %backbone.8.block.4.weight : Float(64, 64, 1, 3), %backbone.8.block.4.bias : Float(64), %backbone.8.block.6.weight : Float(256, 64, 1, 1), %backbone.8.block.6.bias : Float(256), %backbone.9.weight : Float(256), %backbone.9.bias : Float(256), %backbone.9.running_mean : Float(256), %backbone.9.running_var : Float(256), %backbone.11.block.0.weight : Float(64, 256, 1, 1), %backbone.11.block.0.bias : Float(64), %backbone.11.block.2.weight : Float(64, 64, 3, 1), %backbone.11.block.2.bias : Float(64), %backbone.11.block.4.weight : Float(64, 64, 1, 3), %backbone.11.block.4.bias : Float(64), %backbone.11.block.6.weight : Float(256, 64, 1, 1), %backbone.11.block.6.bias : Float(256), %backbone.12.weight : Float(256), %backbone.12.bias : Float(256), %backbone.12.running_mean : Float(256), %backbone.12.running_var : Float(256), %backbone.16.weight : Float(256, 64, 1, 4), %backbone.16.bias : Float(256), %backbone.17.weight : Float(256), %backbone.17.bias : Float(256), %backbone.17.running_mean : Float(256), %backbone.17.running_var : Float(256), %backbone.20.weight : Float(68, 256, 13, 1), %backbone.20.bias : Float(68), %backbone.21.weight : Float(68), %backbone.21.bias : Float(68), %backbone.21.running_mean : Float(68), %backbone.21.running_var : Float(68), %container.0.weight : Float(68, 516, 1, 1), %container.0.bias : Float(68)):

Original python traceback for operator 14 in network torch-jit-export_predict in exception above (most recent call last):
Traceback (most recent call last):
  File "to_onnx_lpr.py", line 32, in <module>
    outputs = rep.run(np.random.randn(1, 3, 24, 94).astype(np.float32))
  File "/usr/local/lib/python3.6/dist-packages/caffe2/python/onnx/backend_rep.py", line 57, in run
    self.workspace.RunNet(self.predict_net.name)
  File "/usr/local/lib/python3.6/dist-packages/caffe2/python/onnx/workspace.py", line 63, in f
    return getattr(workspace, attr)(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/caffe2/python/workspace.py", line 255, in RunNet
    StringifyNetName(name), num_iter, allow_fail,
  File "/usr/local/lib/python3.6/dist-packages/caffe2/python/workspace.py", line 216, in CallWithExceptionIntercept
    return func(*args, **kwargs)
RuntimeError: [enforce fail at conv_op_impl.h:38] C == filter.dim32(1) * G. 128 vs 64. Convolution op: input channels does not match: # of input channels 128 is not equal to kernel channels * group: 64 * 1
Error from operator:
input: "76" input: "backbone.8.block.0.weight" input: "backbone.8.block.0.bias" output: "77" name: "Conv_14" type: "Conv" arg { name: "strides" ints: 1 ints: 1 } arg { name: "pads" ints: 0 ints: 0 ints: 0 ints: 0 } arg { name: "dilations" ints: 1 ints: 1 } arg { name: "kernels" ints: 1 ints: 1 } arg { name: "group" i: 1 } device_option { device_type: 0 device_id: 0 }

I tried changing the data format, but it still fails like this. Hope to get your reply, thank you!
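For anyone hitting the same Conv_14 error: the graph dump above shows that backbone.4 ends in 128 output channels (backbone.4.block.6.weight is 128x32x1x1) while backbone.8.block.0.weight is 64x64x1x1, so it expects a 64-channel input. The parameter-free layers at backbone.6/7 are therefore supposed to halve the channels; in the reference LPRNet that is an nn.MaxPool3d that also pools along the channel axis, and the executed ONNX/Caffe2 graph is clearly not applying that reduction, since 128 channels still reach Conv_14. A possible workaround, sketched below as an assumption about this backbone rather than a confirmed fix, is to replace the 3D pool with an export-friendly equivalent built from MaxPool2d plus a 1D max pool along the channel axis, then verify the outputs match before re-exporting. The module name ChannelMaxPool and the kernel/stride values are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelMaxPool(nn.Module):
    # Export-friendly stand-in for nn.MaxPool3d(kernel_size=(kc, kh, kw), stride=(sc, sh, sw))
    # applied to an NCHW tensor: MaxPool2d handles H/W, then max_pool1d pools along the
    # channel axis, so only ops with standard ONNX mappings end up in the graph.
    def __init__(self, kernel_size, stride):
        super().__init__()
        kc, kh, kw = kernel_size
        sc, sh, sw = stride
        self.spatial = nn.MaxPool2d((kh, kw), stride=(sh, sw))
        self.kc, self.sc = kc, sc

    def forward(self, x):
        x = self.spatial(x)                          # same H/W pooling as the 3D pool
        n, c, h, w = x.shape
        x = x.reshape(n, c, h * w).permute(0, 2, 1)  # (N, H*W, C): channels become the pooled length
        x = F.max_pool1d(x, kernel_size=self.kc, stride=self.sc)
        return x.permute(0, 2, 1).reshape(n, -1, h, w)

# Hypothetical usage before re-exporting (index and values must match the real backbone,
# and any other MaxPool3d in it would need the same treatment):
# model.backbone[7] = ChannelMaxPool(kernel_size=(1, 3, 3), stride=(2, 1, 2))
# then check torch.allclose(model(x), original_model(x)) on a few sample inputs.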

RongbaoHan commented 3 years ago

Please tell me how to solve this problem: RuntimeError: Exporting the operator affine_grid_generator to ONNX opset version 10 is not supported. Thanks a lot!
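A note on the opset 10 error above: affine_grid has no corresponding ONNX operator in old opsets (ONNX only added AffineGrid in opset 20), so simply raising opset_version on an older PyTorch will not make the error go away. A common workaround is to export only the part of the STN that predicts the 2x3 affine parameters and keep the actual warping in PyTorch at inference time. The sketch below assumes the usual STN layout with a localization sub-network and an fc_loc head; those attribute names are guesses, so adapt them to the STNet class in this repo.

import torch
import torch.nn.functional as F

class ThetaHead(torch.nn.Module):
    # Wraps an STN so the ONNX export stops at the predicted 2x3 affine parameters;
    # the unsupported affine_grid/grid_sample calls never enter the exported graph.
    # The attribute names (localization, fc_loc) are assumptions about the STN layout.
    def __init__(self, stn):
        super().__init__()
        self.stn = stn

    def forward(self, x):
        feats = self.stn.localization(x)
        theta = self.stn.fc_loc(feats.view(feats.size(0), -1))
        return theta.view(-1, 2, 3)

def warp(x, theta):
    # Applied after running the exported theta head, in plain PyTorch (not part of the ONNX graph).
    # Keep align_corners consistent with whatever the original training code used.
    grid = F.affine_grid(theta, x.size())
    return F.grid_sample(x, grid)

# Hypothetical export call, using the 1x3x24x94 plate crop size seen elsewhere in this thread:
# torch.onnx.export(ThetaHead(stn).eval(), torch.randn(1, 3, 24, 94), "stn_theta.onnx", opset_version=10)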

AbdulMoqeet commented 2 years ago

@RongbaoHan @JF-Lee Did you guys succeed?

ybcc2015 commented 2 years ago

Hi, how do you convert the STN to ONNX? When I convert the STN to ONNX, I get this error: RuntimeError: Exporting the operator affine_grid_generator to ONNX opset version 9 is not supported.
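Same operator, same limitation at opset 9. If the warp has to stay inside the ONNX graph rather than being applied in PyTorch afterwards (see the sketch above), affine_grid itself can be rebuilt from basic tensor ops that should trace to plain ONNX ops even at old opsets. Below is a minimal sketch assuming the align_corners=True convention that older PyTorch versions used by default; verify it against F.affine_grid numerically before putting it into the STN. Note that grid_sample is a separate hurdle: ONNX only gained a GridSample operator in opset 16, so an old export stack will still reject that half of the warp.

import torch

def affine_grid_manual(theta, size):
    # theta: (N, 2, 3) affine parameters; size: (N, C, H, W) of the output feature map.
    # Builds the same normalized sampling grid as F.affine_grid(theta, size, align_corners=True)
    # using only linspace / stack / bmm.
    n, _, h, w = size
    xs = torch.linspace(-1.0, 1.0, w, device=theta.device)
    ys = torch.linspace(-1.0, 1.0, h, device=theta.device)
    x = xs.view(1, w).expand(h, w)
    y = ys.view(h, 1).expand(h, w)
    ones = torch.ones(h, w, device=theta.device)
    base = torch.stack([x, y, ones], dim=-1).reshape(1, h * w, 3).expand(n, h * w, 3)
    grid = torch.bmm(base, theta.transpose(1, 2))  # (N, H*W, 2), each row is (x', y')
    return grid.view(n, h, w, 2)

# Quick numerical check against the built-in before swapping it in:
# theta = torch.randn(2, 2, 3)
# ref = torch.nn.functional.affine_grid(theta, (2, 3, 24, 94), align_corners=True)
# print((affine_grid_manual(theta, (2, 3, 24, 94)) - ref).abs().max())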