Hello,
I am able to reproduce the issue; the problem is that the output tensors are not fused the right way.
Are you using TF 2.x to quantise the model? I am not able to quantise with TF 2.x using the following options:
```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir='egdetpu_test/saved_model/')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
converter.experimental_new_quantizer = True
converter.representative_dataset = representative_data_gen
tflite_model = converter.convert()
```
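Here `representative_data_gen` is assumed to be a generator yielding calibration samples; a minimal sketch, where the image directory, input size, and scaling are all assumptions:

```python
import glob
import numpy as np
import tensorflow as tf

def representative_data_gen():
    # yield ~100 preprocessed sample images for quantization calibration;
    # the directory, 640x640 input size, and [-1, 1] scaling are assumptions
    for path in glob.glob("calibration_images/*.png")[:100]:
        image = tf.io.decode_png(tf.io.read_file(path), channels=3)
        image = tf.image.resize(image, (640, 640))
        image = (image / 127.5) - 1.0
        yield [np.expand_dims(image.numpy(), axis=0).astype(np.float32)]
```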
My suggestion is that we need to fix this either when exporting checkpoints to saved_model or during quantisation.
Can you try to quantise with TF 2.x?
@Naveen-Dodda sorry, I accidentally provided the wrong saved model (the one generated by exporter_main_v2.py, not the one generated by export_tflite_graph_tf2.py).
New zipped files at https://drive.google.com/file/d/1TTDqyeYfGRQPC6zqMQr9dsybxjnFpwPh/view?usp=sharing
- In the models folder are both saved_models; each folder name states the exporter I used to generate it.
- In the my_ssd_mobilenet_v2_640 folder are the results of the training, with checkpoints and pipeline.config (retrained ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8).
I have two systems where I tried it, both producing the same error.
System 1:
- OS: Ubuntu 20.04.2 LTS
- TensorFlow: latest tf-nightly-gpu (2.6); tried 2.4.1 too
- Python 3 version: 3.8
- TPU device: Coral USB Accelerator

System 2:
- OS: openSUSE Leap 15.2 (I know it is not officially supported, but the example works with it, and my company wants it to run on it)
- TensorFlow: none (training and converting on the other system)
- Python 3 version: 3.6
- TPU device: Coral M.2 Accelerator with Dual Edge TPU
If you need more info, just ask.
Steps I did to produce the error (a sketch of convert2tflite.py follows the commands):
```sh
python3 export_tflite_graph_tf2.py --pipeline_config_path models/my_ssd_mobilenet_v2_640/pipeline.config --trained_checkpoint_dir models/my_ssd_mobilenet_v2_640/ --output_directory exported-models/my_model25/export_tflite_graph_tf2
python3 convert2tflite.py
edgetpu_compiler models/model.tflite -o models
python3 detect_image.py -m models/model_edgetpu.tflite -i A00147.png
```
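convert2tflite.py itself is not included in the thread; a minimal sketch of what it presumably contains, combining the converter settings discussed in this thread (the saved-model path and the calibration generator are assumptions):

```python
# convert2tflite.py -- hypothetical reconstruction based on this thread
import glob
import numpy as np
import tensorflow as tf

def representative_data_gen():
    # calibration image directory, input size, and preprocessing are assumptions
    for path in glob.glob("calibration_images/*.png")[:100]:
        image = tf.io.decode_png(tf.io.read_file(path), channels=3)
        image = tf.image.resize(image, (640, 640))
        yield [np.expand_dims(image.numpy(), axis=0).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model(
    "exported-models/my_model25/export_tflite_graph_tf2/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.representative_dataset = representative_data_gen

tflite_model = converter.convert()
with open("models/model.tflite", "wb") as f:
    f.write(tflite_model)
```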
Anything I did wrong, or is there another way to get from a trained model to an Edge TPU tflite model?
@Naveen-Dodda Anything new? I tried multiple ways, but the error message doesn't change.
@ulrichMarco
I was able to quantize, compile, and test with pycoral/examples/detect_image.py under these conditions.
TF version == 2.5.0
I used the saved_model provided by you (edgetpu-test/models/export_tflite_graph_tf2/saved_model). To quantize, you need to change the converter settings to the following:
```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("models/export_tflite_graph_tf2/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.experimental_new_converter = True
converter.experimental_new_quantizer = True
# a representative dataset (as in representative_data_gen above) is still
# required for full-integer quantization
converter.representative_dataset = representative_data_gen
tflite_model = converter.convert()
```
The quantised model can be compiled with edgetpu_compiler 15.0.
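For completeness, writing the converted model to disk and compiling it could look like this; the file names are assumptions chosen to match the commands earlier in the thread:

```python
# save the quantized flatbuffer produced by converter.convert()
with open("models/model.tflite", "wb") as f:
    f.write(tflite_model)
```

```sh
edgetpu_compiler models/model.tflite -o models
```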
Hi @Naveen-Dodda, I tried your solution and I am able to quantize. The problem is that the accuracy falls steeply. I also tried to quantize this model -> http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_mobilenet_v2_320x320_coco17_tpu-8.tar.gz and tested with the evaluation set from the COCO 2017 dataset, and the AP still drops.
Could you quantize that model in a better way? If I can give you more info on what I have done, please ask. Thanks.
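One way to narrow down where the accuracy is lost is to run the quantized model directly with the TFLite interpreter and inspect its outputs before involving the Edge TPU; a minimal sketch, where the model path and the input image are assumptions:

```python
import numpy as np
import tensorflow as tf

# load the quantized model on the host CPU (path is an assumption)
interpreter = tf.lite.Interpreter(model_path="models/model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
_, height, width, _ = input_details["shape"]

# feed one uint8 test image (random here; a real evaluation image should be used)
image = np.random.randint(0, 256, size=(1, height, width, 3), dtype=np.uint8)
interpreter.set_tensor(input_details["index"], image)
interpreter.invoke()

# SSD postprocess outputs are boxes, classes, scores, and detection count
for detail in interpreter.get_output_details():
    print(detail["name"], interpreter.get_tensor(detail["index"]).shape)
```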
@Naveen-Dodda thanks it works sry for late reply was on vacation.
Hi, I'm using a TensorFlow saved model which works fine. I'm converting it to .tflite using the following code:
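The original snippet is not preserved here; a minimal sketch of what it presumably looked like, based on the converter settings that worked earlier in this thread (the paths, input size, and dummy calibration data are assumptions):

```python
import numpy as np
import tensorflow as tf

def representative_data_gen():
    # dummy calibration data; real preprocessed images should be used,
    # and the 320x320 input size is an assumption
    for _ in range(100):
        yield [np.random.rand(1, 320, 320, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.representative_dataset = representative_data_gen

tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```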
After that I use the edgetpu_compiler:

```sh
edgetpu_compiler model.tflite
```

to compile it to model_edgetpu.tflite. Then running detect_images.py results in:
All models and related files are at https://drive.google.com/drive/folders/10Htmw0JWZ31Z47hn6mdZYHY7AkjXMus7?usp=sharing