Closed syedaffanhamdani closed 5 years ago
Hi! Thank you for the bug report! I have submitted a fix for the described problem. https://github.com/KhronosGroup/NNEF-Tools/commit/846faef8422036f0115208c15bb6a9c23516cfc6 Please pull the changes, reinstall the parser, and try again:
git pull
cd parser/python
python setup.py install
If the problem persists or you have further questions, don't hesitate to reach out again. Kind Regards, Tamás Danyluk
Hi! Many thanks for the fix but after pulling the latest code and reinstalling the parser, I am getting the following error:
Traceback (most recent call last):
  File "/home/ubuntu/NNEF-Tools/nnef_tools/convert.py", line 554, in <module>
    convert_using_argv(sys.argv)
  File "/home/ubuntu/NNEF-Tools/nnef_tools/convert.py", line 542, in convert_using_argv
    conversion_info=args.conversion_info)
  File "/home/ubuntu/NNEF-Tools/nnef_tools/convert.py", line 394, in convert
    custom_converters=custom_converters))
  File "/home/ubuntu/NNEF-Tools/nnef_tools/convert.py", line 311, in convert_using_premade_objects
    target_graph, conv_info = converter(source_graph)
  File "/home/ubuntu/NNEF-Tools/nnef_tools/conversion/converter.py", line 138, in __call__
    target_graph = self.convert_graph(source_graph)
  File "/home/ubuntu/NNEF-Tools/nnef_tools/conversion/tensorflow/nnef_to_tf.py", line 116, in convert_graph
    target_graph = super(Converter, self).convert_graph(source_graph)  # type: TFGraph
  File "/home/ubuntu/NNEF-Tools/nnef_tools/conversion/converter.py", line 133, in convert_graph
    self.convert_operations(source_graph, target_graph)
  File "/home/ubuntu/NNEF-Tools/nnef_tools/conversion/converter.py", line 90, in convert_operations
    self.convert_operation(source_op, target_graph)
  File "/home/ubuntu/NNEF-Tools/nnef_tools/conversion/converter.py", line 103, in convert_operation
    assert False, "No converter for operation '{}'".format(source_op.name)
AssertionError: No converter for operation 'linear_quantize'
Please advise what the possible cause could be. Regards, Affan
The converter is currently not prepared to convert quantize operations in the graph, which occur in the case of dynamic quantization. Is that what you want to have? In the other issue (#81), it seems that you would like static quantization. Can you please elaborate on what exactly you need?
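For reference, a minimal sketch of what a linear quantize operation like the one in the error computes, per my reading of the NNEF specification (simulated, or "fake", quantization: the output stays in floating point but is snapped to a discrete grid); the exact signature in your graph.nnef may differ:

```python
import numpy as np

def linear_quantize(x, lo, hi, bits):
    # Clamp to [lo, hi], then snap to one of 2^bits - 1 evenly
    # spaced levels spanning that range (NNEF-style semantics).
    r = 2.0 ** bits - 1.0
    z = np.clip(x, lo, hi)
    return np.round((z - lo) / (hi - lo) * r) * (hi - lo) / r + lo
```

Because the op appears in the graph itself, a converter needs an explicit mapping for it, which is what the assertion above is complaining about.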
Thanks. I want to perform both static and dynamic quantization during conversion. The post-training quantization literature suggests that "weights and activation tensors in trained neural network models tend to have values that are distributed across comparatively small ranges (e.g. -15 to +15 for weights or -500 to 1000 for image model activations)." I wish to apply these scales in static quantization and see the effect.
Dynamic quantization would also be interesting but as you wrote already, perhaps it is not supported yet.
I understand that you want to do quantization, but why do you need to convert quantization operations for that? Once you have done your quantization, the final graph does not contain quantization ops. Even if you did quantized training, when you prepare your graph for inference, the quantization parameters are typically baked in. If you have your model saved in TF Lite format, you can convert that to NNEF. Furthermore, I don't understand why you want to convert back from NNEF to TF in your use case. How do you want to use the NNEF file? What engine do you want to use to execute it?
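To illustrate the "baked in" point above: after static post-training quantization, the inference graph typically stores the already-quantized weight tensor plus scale metadata, rather than keeping a quantize op in the graph. A rough sketch (symmetric per-tensor int8, illustrative only, not NNEF-Tools code):

```python
import numpy as np

def bake_int8_weights(w):
    # Derive the scale from the weight range and quantize once, offline.
    # The graph then carries the int8 tensor and the scale as metadata;
    # no runtime quantize op is needed for the weights.
    scale = np.abs(w).max() / 127.0
    w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return w_q, scale  # dequantize later as w_q * scale

# Example with the weight range quoted earlier (roughly -15 to +15):
w = np.array([-15.0, 3.2, 14.9])
w_q, scale = bake_int8_weights(w)
```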
Can I close this?
While experimenting with dynamic quantization I am getting this error. The input format is NNEF and the output is a TensorFlow pb. Dynamic quantization is performed in graph.nnef on the activation layer by finding the maximum via max_reduce.
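For what it's worth, the dynamic scheme described above (deriving the range from a reduction over the activations at run time, then quantizing with it) can be sketched roughly as follows; the names here are illustrative, not the NNEF-Tools API:

```python
import numpy as np

def dynamic_quantize(act, bits=8):
    # The range comes from the tensor itself (the "max reduce" step),
    # so the scale changes with every batch of activations.
    lo, hi = act.min(), act.max()
    if hi == lo:
        return act  # degenerate range, nothing to quantize
    r = 2.0 ** bits - 1.0
    scale = (hi - lo) / r
    return np.round((act - lo) / scale) * scale + lo  # fake-quantized
```

Because the range is data-dependent, these reduce/quantize ops must remain in the graph, which is presumably why the converter hits them.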
Complete stack trace:
I also tried linear quantization, but the error is the same. Many thanks in advance.