TexasInstruments / edgeai-torchvision

This repository has been moved. The new location is in https://github.com/TexasInstruments/edgeai-tensorlab
https://github.com/TexasInstruments/edgeai

qat model don't work using TIDL_j7_01_03_00_11 #10

Closed qinduanyinghua closed 2 years ago

qinduanyinghua commented 2 years ago

🐛 Describe the bug

Hello,

  1. I used QuantTrainModule in this repo to train my model, then used TIDL-07_01_00_11 to quantize it. When I run inference with the quantized model, the results are very poor. But when I quantize with TIDL-08_01_00_13, the results are normal. What is the difference between them?
  2. For some reasons I have to use TIDL-07_01. How can I solve this? Thank you.

qinduanyinghua commented 2 years ago

I solved it by converting the trained model to an ONNX file with opset 9 instead of 11.

TTLL0928 commented 2 years ago

Hello, did you encounter the following error when importing an ONNX file exported from a model that uses QuantTrainModule? I use TIDL_07_02.

    ONNX operator Constant is not suported now.. By passing
    [libprotobuf FATAL ./google/protobuf/repeated_field.h:1537] CHECK failed: (index) < (current_size_):
    terminate called after throwing an instance of 'google::protobuf::FatalException'
    what(): CHECK failed: (index) < (current_size_):
    Aborted (core dumped)

alexchungio commented 1 year ago

@qinduanyinghua I met a similar problem: with the TIDL model in TIDL-08_01_00_07, the accuracy drops by about 10 points. I tried your method, but it does not work. Are there any other points needing attention?