Open robinvanemden opened 4 years ago
Thank you so much for filing the issue. We will look at it and take appropriate action as soon as possible.
Hello @robinvanemden, this model is written with IR_version 3, which is over 2 years old. deepC supports onnx 1.5, which accepts IR_version 4 and above.
```
% compile-onnx model.onnx
Model info:
  ir_vesion : 3
  doc       :
...
...
Traceback (most recent call last):
  File "/home/aits/WORK/deepC/deepC/compiler/onnx2exe.py", line 98, in <module>
    sys.exit(main())
  File "/home/aits/WORK/deepC/deepC/compiler/onnx2exe.py", line 87, in main
    (bundleDir, cppFile) = onnx2cpp.main();
  File "/home/aits/WORK/deepC/deepC/compiler/onnx2cpp.py", line 65, in main
    dcGraph = parser.main(onnx_file, bundle_dir, optimize=False, checker=False)
  File "/home/aits/WORK/deepC/deepC/compiler/read_onnx.py", line 493, in main
    dnnc_param = self.addParams(param, saveInput=saveInput)
  File "/home/aits/WORK/deepC/deepC/compiler/read_onnx.py", line 126, in addParams
    param_vals = struct.unpack(pack_format*param_len, param.raw_data)
struct.error: unpack requires a buffer of 432 bytes
```
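The failing call in `read_onnx.py` unpacks an initializer's `raw_data` with a format string sized from the declared element count, so `struct.error` means the byte buffer does not match the number of elements the parser expects. A minimal sketch reproducing that mismatch (the buffer size and element counts here are illustrative, not taken from the model in this issue):

```python
import struct

# A 16-byte buffer, e.g. 4 float32 values at 4 bytes each.
raw_data = bytes(16)

# Matching element count: 4 floats from 16 bytes unpacks fine.
vals = struct.unpack("f" * 4, raw_data)
print(len(vals))  # 4

# Mismatched count, as when declared dims disagree with raw_data:
# 5 floats need a 20-byte buffer, so struct raises the same
# "unpack requires a buffer of N bytes" error as the traceback.
try:
    struct.unpack("f" * 5, raw_data)
except struct.error as err:
    print(err)
```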
Do you have a newer version of this model? If not, please use the onnx version converter and try again.
Thanks for your fast response! I had actually converted the model down; I will try again with the higher IR version.
My apologies for not following up sooner. Attached is an updated version of the model, which seems to throw the same error.
Using WinMLTools to optimize the model from 32-bit floating point to 8-bit integers results in the following error:
The traceback seems to indicate that deepC ought to be able to convert the model but encounters a minor issue; would you agree? Attached is the uint8-optimized ResNet CIFAR model we used to test the 8-bit integer quantized model.
model.zip