Hi,
The value in the dictionary stands for the scale for now, which means `int_value = scale * fp_value`. The `output_exponent` value in the config JSON file should be `-log2(scale)`. The tool will export the exponent value directly in the next version.
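As an illustration, here is a minimal sketch of that conversion, using the pickle file shown later in this thread (`scale_to_exponent` is a hypothetical helper name, not part of the tool):

```python
import math
import pickle

import numpy as np

# Load the calibration table produced by the quantization tool.
with open('mnist_calib.pickle', 'rb') as f:
    scales = pickle.load(f)

def scale_to_exponent(scale):
    # output_exponent = -log2(scale); weight scales are stored as 1-element arrays.
    value = float(np.asarray(scale).ravel()[0])
    return -int(math.log2(value))

exponents = {name: scale_to_exponent(s) for name, s in scales.items()}
print(exponents)  # e.g. scale 16.0 -> exponent -4, scale 256.0 -> exponent -8
```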
For your own model, the steps are mostly correct:

1. Prepare a float32 model and convert it to an ONNX model.
2. Run `quantization_tool/examples/example.py`, pointing it at your own model and calibration dataset:

```python
model_path = 'mnist_model_example.onnx'
calib_dataset = test_images[0:5000:50]
```

3/4. Get the `output_exponent` from `-log2(scale)`, then do step 3 and step 4 as you described. Steps 3/4 will also be supported in the quantization tool for convenience, but that support is still being tested; you can experiment with it by calling `_export_coefficient_to_cpp(model, pickle_file_path, target_chip, outputpath, name)`. A new version will be released soon. A minimal sketch of this flow follows below.
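For reference, a sketch of steps 1 and 2 based on the repository's example.py; treat the `Calibrator` arguments and `generate_quantization_table` call as assumptions taken from the public example and adapt them to your model:

```python
import numpy as np
import onnx

from calibrator import Calibrator          # quantization_tool/calibrator
from optimizer import optimize_fp_model    # quantization_tool/optimizer

# Step 1: a float32 model already converted to ONNX.
model_path = 'mnist_model_example.onnx'
optimized_model_path = optimize_fp_model(model_path)
model_proto = onnx.load(optimized_model_path)

# Step 2: calibrate on a slice of the test set to obtain per-tensor scales.
# Placeholder data; the real calib_dataset must match the model's input shape.
test_images = np.random.rand(5000, 28 * 28).astype(np.float32)
calib_dataset = test_images[0:5000:50]

calib = Calibrator('int16', 'per-tensor', 'minmax')
calib.set_providers(['CPUExecutionProvider'])
calib.generate_quantization_table(model_proto, calib_dataset, 'mnist_calib.pickle')
```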
If there is any question or suggestion, please feel free to let us know
@Auroragan
Hi,
In which dynamic library can I locate this function (`export_coefficient_to_cpp`), and when will the next version be released?
Hi, @PureHing
Please check the latest master branch.
The function is in the calibrator; you can refer to the code in example.py:

```python
calib.export_coefficient_to_cpp(model_proto, pickle_file_path, 'esp32s3', '.', 'test_mnist', True)
```
@Auroragan Exporting finished; the output files are `./test_mnist.cpp` and `./test_mnist.hpp`. Are the .npy files still necessary for `convert.py`?
The purpose of `convert.py` is to convert the coefficients stored in .npy files to .cpp and .hpp, which is the same thing the `export_coefficient_to_cpp` function does. If you can use `export_coefficient_to_cpp` to convert, you don't need to use `convert.py` anymore.
Thanks very much!
@yehangyang Hi,
I got the ONNX model `mnist_model_pytorch1.onnx` (softmax removed) according to the code shown here:
![Screenshot_select-area_20210831134128](https://user-images.githubusercontent.com/62579216/131448408-072fa4bb-f43a-4cd6-90f5-c8a0d7af0da0.jpg)
Then I got `mnist_calib.pickle` by executing `quantization_tool/examples/example.py`:

```
>>> f = open("mnist_calib.pickle", 'rb')
>>> a = pickle.load(f)
>>> a
{'9': 16.0, '11': 8.0, 'output': 4.0, 'input': 64.0, '8': 16.0, '10': 8.0, '7': 64.0, 'fc1.weight': array([256.]), 'fc1.bias': 16.0, 'fc2.weight': array([256.]), 'fc2.bias': 8.0, 'fc3.weight': array([128.]), 'fc3.bias': 4.0}
```

Do the values (16.0, 8.0, ...) in the dictionary represent the `output_exponent` values in the config JSON file?

BTW, for my own model, are the following steps correct?

1. Prepare a float32 model and convert it to an ONNX model.
2. Execute `quantization_tool/examples/example.py`.
3. Get the `output_exponent` (can the current tools generate the exponent value?) and write a config.json file.
4. Execute `convert_tool/convert.py`.
Thanks!