Closed edward9112 closed 3 years ago
Hello. Support for IE model v10 is not complete yet. If you send me your model, I will fix the error when I have free time.
Awesome! Just sent you the model via email.
The bug was fixed.
Thank you!
Can you add support for the yolo models as well? They currently produce this error:
Can't convert layer : id = 59 , name = predict_conv/BiasAdd/YoloRegion , type = RegionYolo , version = opset1 !
Can't convert IE model v10!
OK. I will check it next week.
Awesome! And, by the way, person-vehicle-bike-detection-2002 does not run inference properly: it just generates a bunch of rectangles, although the conversion itself works fine. Not sure if it's a model issue or a conversion issue.
I fixed this error today. You have to perform conversion again.
By the way, FP16 models also do not convert; here is what it says:
Unknown element_type = f16 !
Can't convert layer : id = 1 , name = data_mul_/copy_const , type = Const , version = opset1 !
Can't convert IE model v10!
Synet does not support FP16 models (they are for GPU only).
Got it. What about INT8?
INT8 is supported, but network output is not equal to openvino output. There is a rounding error (FP32 to INT8 conversion).
What about FP32-INT8 models?
I am sorry. I meant FP32-INT8 models all along. So Synet supports FP32-INT8 models, not pure INT8.
I get the following errors while converting different FP32-INT8 models:
Can't convert layer : id = 9 , name = init_block1/dim_inc/conv/fq_input_0 , type = FakeQuantize , version = opset1 !
Can't convert layer : id = 13 , name = L0008_ActivationBin-back_bone_seq.conv2_2_sep_relubin_bin_convBIN01/Quantize , type = Quantize !
The models are: person-detection-retail-0013, pedestrian-detection-adas-binary-0001
I am developing my own quantization method for FP32-INT8 models. Supporting conversion of FP32-INT8 OpenVINO models is a dead end, so I will support only conversion of FP32 OpenVINO models.
Got it. Is your method also open source?
Of course. See Synet/src/Test/TestQuantization.cpp
TestQuantization says "Can't load param.xml". Is there a list of required arguments?
File 'param.xml' contains additional information needed to run the network in tests (input scale range, etc.). See the declaration of the TestParam structure in file TestNetwork.h. Each of Synet's tests contains such a file.
Is there a typical sample of the param file? The declaration has many parameters that require knowledge of the quantizer that most people don't have.
Quantization is a complicated process and is not fully automated. The operation leads to accuracy degradation and requires test samples. It also requires independent verification of the final accuracy.
Got it, thanks! Any updates on the yolo models? By the way, I tried to convert the yolo v3 and v4 models from https://github.com/AlexeyAB/darknet. The conversion went well, but inference just didn't work.
Hello, I found your framework recently, and it's amazing! But I have some problem with conversion this model I have the following errors:
Not implemented layer : name = tile1033 ; type = Tile
Not implemented layer : name = decode ; type = CTCGreedyDecoder
Could you help me, please?
Thank you for bug report! The bug was fixed. See test_014.
Hello, I have a question about how to use the license plate recognition model. That model has 2 inputs, and Network::SetInput doesn't work, because Synet's SetInput checks that the network has exactly 1 input.
You have to use the function Network::Src and set the inputs manually. As an example, you can look at SynetNetwork::SetInput in file TestSynet.h.
Basically I can convert only the oldest, early 2019 models.