Closed ramkumarkoppu closed 3 years ago
In the develop-tf branch, the ONNX file generated by training is quantized directly by the ai8xize synthesis tool, with --scale specifying the quantization scale. Using --generate-dequantized-onnx-file, a second, dequantized ONNX file is generated for use in evaluation. The gen-tf-demos-max78000.sh script in the ai8x-synthesis develop-tf branch includes all the required flags. Please check the Post-Training Model Quantization and MAX78000 sections of the README in ai8x-training/TensorFlow for more information.
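As a rough sketch, an ai8xize invocation with the flags mentioned above might look like the following. Only --scale and --generate-dequantized-onnx-file come from this answer; the file paths and the remaining arguments are placeholders, so consult gen-tf-demos-max78000.sh for the exact command used in the repository:

```shell
# Hypothetical example -- paths and most arguments are placeholders,
# not taken from the actual script.
python ai8xize.py \
  --checkpoint-file trained/my_model.onnx \
  --config-file networks/my_model.yaml \
  --device MAX78000 \
  --scale 0.5 \
  --generate-dequantized-onnx-file
```

The dequantized ONNX file produced by the last flag can then be fed to the evaluation step instead of the original float model.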
quantize.py contains code to quantize a PyTorch model, but I couldn't find an equivalent script to quantize a TensorFlow model trained with the develop-tf branch of the ai8x-training repository. What is the script to quantize a TensorFlow model for the MAX78000 device?