Open ItsMeTheBee opened 4 years ago
@tombstone @jch1 @pkulzc I'd be glad about some support on this issue :)
Using the config file & the code from this issue https://github.com/tensorflow/models/issues/9033 I've been able to convert a model to tflite with quantization. I'll compare the config files a bit to find out why it does not work with the SSD networks I trained before. I still think that issue and the code used there should be referenced or documented somewhere, since I had a hard time finding it (I always searched for "quantization", because the conversion to tflite itself was not the problem).
@ItsMeTheBee can you share how you converted the model to tflite? I am probably having the same error, but I can't even get my tflite model up and running. Thanks!
@DeRealMorgan I literally copied all the steps, scripts etc. mentioned here https://github.com/tensorflow/models/issues/9033#issuecomment-694852560 and trained a network with the config file shown there; that worked well. I did not manage to get it to work with my original MobileNet SSD v2 network or my SSD ResNet network, but I'll look into that. You should also read the comment after the linked one to get it to work on the Edge TPU, because you need to make one small adjustment to the code in the linked comment. For simply converting to tflite without any optimization you don't need a representative dataset or the whole int8 handling, so the following worked fine for me (only tested on MobileNet SSD v2 models):
```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('PATH/TO/saved_model')
converter.experimental_new_converter = True
converter.allow_custom_ops = True
tflite_model = converter.convert()

with open('PATH/TO/neuralnet.tflite', 'wb') as f:
    f.write(tflite_model)
```
If you want to test your model, this script might come in handy.
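For reference, here is a minimal self-contained sketch of how such a test can look with `tf.lite.Interpreter`. The toy Keras model is only a stand-in so the snippet runs on its own; for a real test you would replace `model_content=tflite_model` with `model_path='PATH/TO/neuralnet.tflite'` and feed a real image instead of random data:

```python
import numpy as np
import tensorflow as tf

# Toy stand-in for a real detector, so this snippet is runnable as-is.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(4),
])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Load the converted model; for a file on disk use model_path=... instead.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input with the shape and dtype the model expects.
dummy = np.random.random_sample(input_details[0]['shape']).astype(
    input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]['index'])
print(result.shape)
```

For a detection model, `get_output_details()` typically lists several tensors (boxes, classes, scores, number of detections) rather than the single output of this toy model.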
Prerequisites
Please answer the following question for yourself before submitting an issue.
1. The entire URL of the documentation with the issue
https://github.com/tensorflow/models/tree/master/research/...
2. Describe the issue
Hey all!
I'm quite sure I'm not on the right track yet, which is why I opened this as a documentation issue, but I still added some code and output in case it helps anyone. I've trained an SSD MobileNet v2 and an SSD ResNet50_v1_fpn_640x640 to detect custom objects using the TF2 Object Detection API. Because I need to use the Coral chip to run my models, I've been searching for documentation on how to do post-training quantization on models trained with the TF2 Object Detection API, or just how to get them to run on the Edge TPU in general. Unfortunately I've been unable to find anything that works with TF2.
The code I've tried (I did play around with the options a bit, but since none of them worked I'll just show you my current version):
```python
import tensorflow as tf
```
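For comparison, here is a minimal self-contained sketch of full-integer post-training quantization with a representative dataset, in the shape the Edge TPU compiler expects. The toy model and the random representative data are placeholders, not the setup from this issue; a real detector would instead use `tf.lite.TFLiteConverter.from_saved_model('PATH/TO/saved_model')` and yield real preprocessed images:

```python
import numpy as np
import tensorflow as tf

# Toy model standing in for the exported detector, so this runs as-is.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(4),
])
converter = tf.lite.TFLiteConverter.from_keras_model(model)

# Representative dataset: a generator yielding a list with one input batch
# per call; real code would yield preprocessed training images here.
def representative_dataset():
    for _ in range(10):
        yield [np.random.random((1, 8)).astype(np.float32)]

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict to int8 ops and set integer I/O, as required for the Edge TPU.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_quant_model = converter.convert()
```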
I'm not sure if this is the correct way to build a representative dataset, or whether I should set
converter.experimental_new_converter = False
or not. As it is, I'm getting a lot of letters and numbers as output, and at the end this (I only copied the end of the output because it's really long). I don't know what to do with this output, though...
PS: I'm using tf-nightly