ai-techsystems / deepC

vendor-independent TinyML deep learning library, compiler and inference framework for microcomputers and micro-controllers
https://cainvas.ai-tech.systems/
Apache License 2.0

Uint8 quantized model throws "struct.error" #120

Open robinvanemden opened 4 years ago

robinvanemden commented 4 years ago

Using WinMLTools to optimize the model from 32-bit floating point to 8-bit integers results in the following error:

Traceback (most recent call last):
  File "/usr/local/bin/onnx-cpp", line 11, in <module>
    load_entry_point('deepC==0.13', 'console_scripts', 'onnx-cpp')()
  File "/usr/local/lib/python3.6/dist-packages/deepC/scripts/onnx2cpp.py", line 65, in main
    dcGraph = parser.main(onnx_file, bundle_dir, optimize=False, checker=False)
  File "/usr/local/lib/python3.6/dist-packages/deepC/scripts/read_onnx.py", line 489, in main
    self.addParams(param);
  File "/usr/local/lib/python3.6/dist-packages/deepC/scripts/read_onnx.py", line 129, in addParams
    param_vals = struct.unpack(pack_format*param_len, param.raw_data) ;
struct.error: unpack requires a buffer of 432 bytes
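
The error itself looks like a plain size mismatch between the tensor's uint8 raw_data and the pack format used to decode it. Below is a minimal sketch of that mismatch; the element count of 108 is my assumption, chosen only so the numbers line up with the 432-byte message:

import struct

# Hypothetical sizes, for illustration only: 108 uint8 weights occupy
# 108 bytes of raw_data, but unpacking them with a 4-byte-per-element
# format ('f') asks for 108 * 4 = 432 bytes.
raw_data = bytes(108)          # stand-in for an initializer's raw_data
try:
    struct.unpack('f' * 108, raw_data)
except struct.error as e:
    print(e)                   # unpack requires a buffer of 432 bytes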

The traceback seems to indicate that deepC ought to be able to convert the model but runs into a minor issue - would you agree? Attached is the uint8-optimized ResNet CIFAR model we used to test the 8-bit integer quantized path.

model.zip
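
For completeness, the quantization step was along the lines of the snippet below. The winmltools.utils.quantize call and the file names are written from memory and should be treated as assumptions; the exact signature may differ between WinMLTools releases.

import winmltools

# ASSUMED API: winmltools.utils.quantize as described in the WinMLTools
# docs; the signature may vary by version. File names are placeholders.
model = winmltools.load_model('resnet_cifar_fp32.onnx')
quantized = winmltools.utils.quantize(model, per_channel=True, nbits=8,
                                      use_dequantize_linear=True)
winmltools.utils.save_model(quantized, 'resnet_cifar_uint8.onnx')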

github-actions[bot] commented 4 years ago

Thank you so much for filing the issue. We will look at it and take appropriate action as soon as possible.

srohit0 commented 4 years ago

Hello @robinvanemden, this model is written with IR_version 3, which is over 2 years old. deepC supports onnx 1.5, which accepts IR_version 4 and above.

% compile-onnx model.onnx
Model info:
  ir_vesion :  3 
  doc       : 
...
...
Traceback (most recent call last):
  File "/home/aits/WORK/deepC/deepC/compiler/onnx2exe.py", line 98, in <module>
    sys.exit(main())
  File "/home/aits/WORK/deepC/deepC/compiler/onnx2exe.py", line 87, in main
    (bundleDir, cppFile) = onnx2cpp.main();
  File "/home/aits/WORK/deepC/deepC/compiler/onnx2cpp.py", line 65, in main
    dcGraph = parser.main(onnx_file, bundle_dir, optimize=False, checker=False)
  File "/home/aits/WORK/deepC/deepC/compiler/read_onnx.py", line 493, in main
    dnnc_param = self.addParams(param, saveInput=saveInput)
  File "/home/aits/WORK/deepC/deepC/compiler/read_onnx.py", line 126, in addParams
    param_vals = struct.unpack(pack_format*param_len, param.raw_data) 
struct.error: unpack requires a buffer of 432 bytes

Do you have a newer version of this model? If not, please use the onnx version converter and try again.
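
Something along these lines should do it, assuming the stock onnx Python package and its version converter (opset 10 is the opset that shipped with onnx 1.5; file names are placeholders). The converter does not cover every op, so if it rejects the quantized graph, re-exporting from the original framework with a newer onnx is the fallback.

import onnx
from onnx import version_converter

model = onnx.load('model.onnx')
print('before:', model.ir_version, model.opset_import[0].version)

# Re-target the graph to a newer opset; opset 10 shipped with onnx 1.5.
converted = version_converter.convert_version(model, 10)

# Depending on the onnx build, ir_version may still need a manual bump.
# IR 4 mainly relaxed the rule that every initializer must also be a
# graph input, so raising the declared version is a safe change here.
if converted.ir_version < 4:
    converted.ir_version = 4

onnx.checker.check_model(converted)
onnx.save(converted, 'model_ir4.onnx')
print('after:', converted.ir_version, converted.opset_import[0].version)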

robinvanemden commented 4 years ago

Thanks for your fast response! I had actually converted the model down myself - I will try again with the higher IR version.

robinvanemden commented 4 years ago

My apologies for not following up sooner - attached is an updated version of the model, which seems to throw the same error.

model.zip
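
(For completeness, the IR and opset versions of the re-exported file can be sanity-checked with the stock onnx package before compiling:)

import onnx

m = onnx.load('model.onnx')
print('ir_version :', m.ir_version)
for opset in m.opset_import:
    print('opset      :', opset.domain or 'ai.onnx', opset.version)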