fastmachinelearning / qonnx

QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX
https://qonnx.readthedocs.io/
Apache License 2.0

Unsupported Operation Type Error: BipolarQuant in hls4ml Conversion #146

Open Sayanti17 opened 1 week ago

Sayanti17 commented 1 week ago

I encountered an error while using hls4ml to convert an ONNX model. The relevant portion of the error message is as follows:

File "/home/gy2456/sayanti_hls/example-models/hls_onnx_check1.py", line 27, in <module>
    hls_model = hls4ml.converters.convert_from_onnx_model(onnx_model, hls_config=config)
File "/home/gy2456/sayanti_hls/example-models/venv_say/lib64/python3.9/site-packages/hls4ml/converters/__init__.py", line 366, in convert_from_onnx_model
    return onnx_to_hls(config)
File "/home/gy2456/sayanti_hls/example-models/venv_say/lib64/python3.9/site-packages/hls4ml/converters/onnx_to_hls.py", line 281, in onnx_to_hls
    raise Exception(f'ERROR: Unsupported operation type: {node.op_type}')
Exception: ERROR: Unsupported operation type: BipolarQuant
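For context, a quick way to see which op types the converter will encounter is to scan the graph with the standard onnx API before calling hls4ml. This is only a diagnostic sketch; the file name matches the script below:

import onnx
from collections import Counter

# Count the op types present in the graph; Quant/BipolarQuant entries here
# are the ones onnx_to_hls rejects as unsupported.
model = onnx.load('modelkeras_qcdq.onnx')
print(Counter(node.op_type for node in model.graph.node))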

Steps to Reproduce

I ran the conversion with a simple Python script:

import onnx
from onnx import helper
import qonnx
import hls4ml

# The original model (modelkeras.onnx) was first converted to QCDQ style
# on the command line:
#   qonnx-convert modelkeras.onnx --output-file modelkeras_qcdq.onnx --output-style qcdq

# Load and check the converted ONNX model
onnx_model_path = 'modelkeras_qcdq.onnx'
# onnx_model_path = 'CNV_1W2A_updated.onnx'  # Replace with your ONNX model path
onnx_model = onnx.load(onnx_model_path)
onnx.checker.check_model(onnx_model)

# Step 3: Convert the ONNX model to an hls4ml configuration
config = hls4ml.utils.config_from_onnx_model(onnx_model)
# Optionally, print out the configuration for verification
print(config)

# Step 4: Convert to an hls4ml model
hls_model = hls4ml.converters.convert_from_onnx_model(onnx_model, hls_config=config)
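The qonnx-convert step above was run separately on the command line. For completeness, a sketch of how the same invocation could be run from within the Python script (assuming the qonnx-convert entry point installed with qonnx is on the PATH):

import subprocess

# Run the QONNX -> QCDQ conversion before loading the converted file.
# Same CLI invocation as above; adjust paths as needed.
subprocess.run(
    ['qonnx-convert', 'modelkeras.onnx',
     '--output-file', 'modelkeras_qcdq.onnx',
     '--output-style', 'qcdq'],
    check=True,
)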

The script attempts to convert the ONNX model using the configuration specified in hls_config. The error is raised during the conversion because of the unsupported operation types BipolarQuant and Quant.
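To confirm which quantization nodes survive the QCDQ conversion, qonnx's ModelWrapper can be used. This is a sketch under the assumption that its get_nodes_by_op_type helper behaves as I expect:

from qonnx.core.modelwrapper import ModelWrapper

# Load the converted model and count the nodes hls4ml cannot handle.
# (Assumes get_nodes_by_op_type returns the list of matching nodes.)
model = ModelWrapper('modelkeras_qcdq.onnx')
for op_type in ('Quant', 'BipolarQuant'):
    nodes = model.get_nodes_by_op_type(op_type)
    print(f'{op_type}: {len(nodes)} node(s) remaining')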

Expected Behavior

I expected the hls4ml library to convert the ONNX model successfully, without encountering an unsupported operation type error.

Environment

hls4ml version: 0.8.1
Python version: 3.9
onnx version: 1.17.0
OS: Linux VM
ONNX model details: an existing pretrained model linked from this repository's README: https://github.com/fastmachinelearning/qonnx?tab=readme-ov-file

Additional Information

If there are any known workarounds, or if the operation types Quant and BipolarQuant are planned to be supported in future versions, please let me know. Any guidance on how to proceed or fix this issue would be appreciated.

Thank you!