fastmachinelearning / qonnx

QONNX: Arbitrary-Precision Quantized Neural Networks in ONNX
https://qonnx.readthedocs.io/
Apache License 2.0

TypeError: __init__() takes from 3 to 5 positional arguments but 6 were given when using convert_from_onnx_model #147

Open Sayanti17 opened 1 week ago

Sayanti17 commented 1 week ago

I encountered a TypeError when trying to convert an ONNX model using hls4ml. It seems that too many arguments are being passed to the ModelGraph constructor.

Traceback (most recent call last):
  File "/home/gy2456/sayanti_hls/example-models/hls_onnx_check1.py", line 56, in <module>
    hls_model = hls4ml.converters.convert_from_onnx_model(onnx_model, hls_config=config)
  File "/home/gy2456/.local/lib/python3.9/site-packages/hls4ml/converters/__init__.py", line 366, in convert_from_onnx_model
    return onnx_to_hls(config)
  File "/home/gy2456/.local/lib/python3.9/site-packages/hls4ml/converters/onnx_to_hls.py", line 318, in onnx_to_hls
    hls_model = ModelGraph(config, reader, layer_list, input_layers, output_layers)
TypeError: __init__() takes from 3 to 5 positional arguments but 6 were given

I am using the following code to convert the ONNX model:

Python script:

import onnx
import hls4ml

onnx_model_path = 'CNV_1W2A_updated.onnx'
onnx_model = onnx.load(onnx_model_path)
config = hls4ml.utils.config_from_onnx_model(onnx_model)
hls_model = hls4ml.converters.convert_from_onnx_model(onnx_model, hls_config=config)
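For comparison, here is a minimal sketch of the QONNX preprocessing flow in which the model is wrapped and cleaned with qonnx utilities before being handed to hls4ml. The transformation names below come from the qonnx package; whether convert_from_onnx_model accepts the resulting ModelWrapper directly depends on the installed hls4ml version, so this is an assumption to verify rather than a confirmed fix:

import hls4ml
from qonnx.core.modelwrapper import ModelWrapper
from qonnx.util.cleanup import cleanup_model
from qonnx.transformation.channels_last import ConvertToChannelsLastAndClean

# Wrap the ONNX protobuf in a qonnx ModelWrapper so transformations can be applied
qonnx_model = ModelWrapper('CNV_1W2A_updated.onnx')

# Tidy the graph: shape inference, constant folding, naming cleanup
qonnx_model = cleanup_model(qonnx_model)

# Convert tensor layouts to channels-last, which the hls4ml frontend expects
qonnx_model = qonnx_model.transform(ConvertToChannelsLastAndClean())

# Hand the cleaned model to hls4ml (assumes an hls4ml version whose ONNX
# frontend accepts a qonnx ModelWrapper; 0.8.1 may not)
config = hls4ml.utils.config_from_onnx_model(qonnx_model)
hls_model = hls4ml.converters.convert_from_onnx_model(qonnx_model, hls_config=config)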

Expected Behavior:

The ONNX model should be converted to an HLS model without errors.

Environment:

Python version: 3.9
hls4ml version: 0.8.1
ONNX model: CNV_1W2A_updated.onnx, based on a model taken from https://github.com/fastmachinelearning/qonnx?tab=readme-ov-file
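As a quick diagnostic, the following sketch prints which hls4ml installation is being picked up and how many arguments its ModelGraph constructor actually accepts; the import path assumes an hls4ml 0.8.x layout, where ModelGraph lives in hls4ml.model.graph:

import inspect
import hls4ml
from hls4ml.model.graph import ModelGraph

# Report the installed hls4ml version (should match the 0.8.1 noted above)
print(hls4ml.__version__)

# Show the constructor signature that onnx_to_hls is calling with 6 positional arguments
print(inspect.signature(ModelGraph.__init__))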