openvinotoolkit / openvino

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
https://docs.openvino.ai
Apache License 2.0

Errors occur when using PTQ with custom ops #9799

Closed DwenGu closed 2 years ago

DwenGu commented 2 years ago
System information (version)
Detailed description

I have already implemented the custom op, and I can successfully infer the outputs when the custom op extension files are added, as shown here:

[screenshot: successful inference with the custom op extension loaded]

When using the PTQ Python API to quantize the network, the following errors occurred:

[screenshot: error output from the PTQ quantization run]

I followed the guide https://github.com/openvinotoolkit/openvino_notebooks/blob/main/notebooks/111-detection-quantization/111-detection-quantization.ipynb as shown below.

[screenshot: quantization code following the notebook]

Thanks, DwenGu.

jgespino commented 2 years ago

Hi @DunguTmp

I'll need to investigate further. Could you please share the original model, IR (xml & bin) and the extension files for your custom ops?

Regards, Jesus

DwenGu commented 2 years ago

> Hi @DunguTmp
>
> I'll need to investigate further. Could you please share the original model, IR (xml & bin) and the extension files for your custom ops?
>
> Regards, Jesus

Hi @jgespino: https://drive.google.com/file/d/16jNEDpN9Ii0vFdZ605S2rWW86onif1Se/view?usp=sharing contains my IR and the extension files.

BR, DwenGu.

hannhu commented 2 years ago

I get the same error with a custom operation when I try to convert an ONNX model with Model Optimizer. The error occurs only on the master branch; with OpenVINO 2021.4 installed from the binary package, I can successfully convert the same model.

DwenGu commented 2 years ago

> I get the same error with a custom operation when I try to convert an ONNX model with Model Optimizer. The error occurs only on the master branch; with OpenVINO 2021.4 installed from the binary package, I can successfully convert the same model.

I can successfully convert my PyTorch network to ONNX.

hannhu commented 2 years ago

> I get the same error with a custom operation when I try to convert an ONNX model with Model Optimizer. The error occurs only on the master branch; with OpenVINO 2021.4 installed from the binary package, I can successfully convert the same model.
>
> I can successfully convert my PyTorch network to ONNX.

With the master branch?

DwenGu commented 2 years ago

> I get the same error with a custom operation when I try to convert an ONNX model with Model Optimizer. The error occurs only on the master branch; with OpenVINO 2021.4 installed from the binary package, I can successfully convert the same model.
>
> I can successfully convert my PyTorch network to ONNX.
>
> With the master branch?

I installed OpenVINO on my Windows platform, following this link: https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit-download.html

hannhu commented 2 years ago

> I get the same error with a custom operation when I try to convert an ONNX model with Model Optimizer. The error occurs only on the master branch; with OpenVINO 2021.4 installed from the binary package, I can successfully convert the same model.
>
> I can successfully convert my PyTorch network to ONNX.
>
> With the master branch?
>
> I installed OpenVINO on my Windows platform, following this link: https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit-download.html

Okay, I only get this error with the latest OpenVINO.

jgespino commented 2 years ago

Hi @DunguTmp

Apologies for the delay, I haven't been able to resolve the error message. I also attempted using the Post-Training Optimization Tool with a simple config file but ran into the error below.

I'll need to reach out to the development team for additional guidance. I will let you know what I find out.
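For reference, a minimal POT `DefaultQuantization` config of the kind used here would look roughly like the following sketch; the model name, file paths, and data source are hypothetical placeholders, not the actual files from this issue:

```json
{
    "model": {
        "model_name": "model",
        "model": "model.xml",
        "weights": "model.bin"
    },
    "engine": {
        "type": "simplified",
        "data_source": "calibration_images"
    },
    "compression": {
        "target_device": "CPU",
        "algorithms": [
            {
                "name": "DefaultQuantization",
                "params": {
                    "preset": "performance",
                    "stat_subset_size": 300
                }
            }
        ]
    }
}
```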

pot -c simple.json
15:22:52 accuracy_checker WARNING: c:\users\jgespino\appdata\local\programs\python\python37\lib\site-packages\defusedxml\__init__.py:30: DeprecationWarning: defusedxml.cElementTree is deprecated, import from defusedxml.ElementTree instead.
  from . import cElementTree

15:22:53 accuracy_checker WARNING: C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\tools\post_training_optimization_toolkit\compression\algorithms\quantization\optimization\algorithm.py:39: UserWarning: Nevergrad package could not be imported. If you are planning to useany hyperparameter optimization algo, consider installing itusing pip. This implies advanced usage of the tool.Note that nevergrad is compatible only with Python 3.6+
  'Nevergrad package could not be imported. If you are planning to use'

INFO:app.run:Output log dir: ./results\4IGPU_Bwarp_Fp16_DefaultQuantization\2022-01-27_15-22-53
INFO:app.run:Creating pipeline:
 Algorithm: DefaultQuantization
 Parameters:
        preset                     : performance
        stat_subset_size           : 300
        target_device              : CPU
        model_type                 : None
        dump_intermediate_model    : False
        exec_log_dir               : ./results\4IGPU_Bwarp_Fp16_DefaultQuantization\2022-01-27_15-22-53
 ===========================================================================
Traceback (most recent call last):
  File "C:\Users\jgespino\AppData\Local\Programs\Python\Python37\Scripts\pot-script.py", line 11, in <module>
    load_entry_point('pot', 'console_scripts', 'pot')()
  File "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\tools\post_training_optimization_toolkit\app\run.py", line 36, in main
    app(sys.argv[1:])
  File "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\tools\post_training_optimization_toolkit\app\run.py", line 60, in app
    metrics = optimize(config)
  File "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\tools\post_training_optimization_toolkit\app\run.py", line 121, in optimize
    model = load_model(config.model)
  File "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\tools\post_training_optimization_toolkit\compression\graph\model_utils.py", line 26, in load_model
    return NXModel(config=model_config)
  File "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\tools\post_training_optimization_toolkit\compression\graph\nx_model.py", line 45, in __init__
    self._from_config(kwargs['config'])
  File "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\tools\post_training_optimization_toolkit\compression\graph\nx_model.py", line 63, in _from_config
    self._models.append({'model': load_graph(model_config)})
  File "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\tools\post_training_optimization_toolkit\compression\graph\graph_utils.py", line 38, in load_graph
    graph_from_ir, meta_data = stdout_redirect(restore_graph_from_ir, xml_path, bin_path)
  File "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\tools\post_training_optimization_toolkit\compression\utils\logger.py", line 129, in stdout_redirect
    res = fn(*args, **kwargs)
  File "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\model_optimizer\mo\utils\ir_reader\restore_graph.py", line 39, in restore_graph_from_ir
    new_graph = copy_graph_with_ops(ir.graph)
  File "C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\model_optimizer\mo\utils\ir_reader\layer_to_class.py", line 360, in copy_graph_with_ops
    'please check it!'.format(op_type)
AssertionError: Operation BackwardWarp not found in MO operations, please check it!

Regards, Jesus

Ref. 77257
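The traceback above shows where the failure happens: POT's IR reader rebuilds the graph from Model Optimizer's registry of known operations, and any op type missing from that registry (here `BackwardWarp`) trips the assertion in `copy_graph_with_ops`. A simplified, self-contained sketch of that check (not the actual MO code; the registry contents below are illustrative):

```python
# Simplified sketch of the check in layer_to_class.copy_graph_with_ops:
# MO keeps a registry of known operation classes, and the IR reader asserts
# that every op type found in the IR is present in it. The set below is
# illustrative -- the real registry is populated by MO's extensions.
KNOWN_OPS = {"Convolution", "ReLU", "Result"}

def copy_graph_with_ops(op_types):
    for op_type in op_types:
        assert op_type in KNOWN_OPS, \
            'Operation {} not found in MO operations, please check it!'.format(op_type)
    return list(op_types)

# An IR containing an unregistered custom op fails exactly like the log above:
try:
    copy_graph_with_ops(["Convolution", "BackwardWarp"])
except AssertionError as err:
    print(err)  # -> Operation BackwardWarp not found in MO operations, please check it!
```

This is why loading the extension at inference time is not enough for POT: the quantization tool goes through MO's IR reader, which needs the custom op registered on the MO side as well.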

jgespino commented 2 years ago

Hi @DunguTmp

Apologies for the delay. The development team has informed me they've made changes in the IR frontend that should help with custom layers. These changes are included in the master branch of OpenVINO.

We have a pre-release available for OpenVINO 2022.1. Could you please try converting your model to IR with this release and using the POT to quantize it?

To install the pre-release using python pip run the following command: pip install openvino-dev==2022.1.0.dev20220131

Please let me know if you run into any issues.

Regards, Jesus

jgespino commented 2 years ago

Closing, please re-open if additional assistance is needed.

hannhu commented 2 years ago

Hello, I have received it and will handle it as soon as possible.