Open xumengwei opened 3 years ago
Can you post the code snippet here pls?
@sachinkmohan I've attached my code. Basically, I want to apply quantization-aware training to TF OD models, but it doesn't seem to work. ssd.txt
Instead of giving the model_dir, try giving filepath, as mentioned in the TF documentation:

filepath = 'filename.h5'
model = keras.models.load_model(filepath)
@sachinkmohan Nope we're using pb files downloaded from the model zoo, not h5 models.
I couldn't find any results for loading .pb files with load_model. Just a suggestion: convert the .pb files to .h5 files and try again. Otherwise, load the .pb file into the model and run model.summary() to verify that it loaded correctly.
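To illustrate the .h5 round-trip suggested above, here is a minimal sketch using a toy Keras model (the OD zoo exports are not Keras models, so this only shows the mechanics of save/load/summary, not a fix for those checkpoints):

```python
import tensorflow as tf

# Stand-in Keras model -- just for demonstrating the .h5 round-trip.
inputs = tf.keras.Input(shape=(4,))
outputs = tf.keras.layers.Dense(2)(inputs)
model = tf.keras.Model(inputs, outputs)

model.save("model.h5")  # HDF5 format, as suggested above

reloaded = tf.keras.models.load_model("model.h5")
reloaded.summary()  # verify the model loaded correctly
```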
@sachinkmohan Sorry for the late response. I tried the approach you mentioned, but it failed. I can load the model with tf.saved_model.load, but not into Keras format. I think this issue has been raised many times, such as here. Could you give it a try and load the pre-trained OD model into Keras format? It would be a great help to many people.
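A minimal sketch of why the two loaders behave differently: tf.saved_model.load accepts any SavedModel but returns a generic trackable object, not a tf.keras.Model. The Adder module below is a stand-in (an assumption for illustration) for a SavedModel that was not exported through Keras, like the OD zoo ones:

```python
import tensorflow as tf

# Save a plain tf.Module as a SavedModel -- analogous to the OD zoo exports,
# which carry no Keras metadata.
class Adder(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
    def __call__(self, x):
        return x + 1.0

tf.saved_model.save(Adder(), "plain_saved_model")

loaded = tf.saved_model.load("plain_saved_model")  # works
print(isinstance(loaded, tf.keras.Model))          # not a Keras model
# tf.keras.models.load_model("plain_saved_model") would fail here,
# because no Keras metadata was stored with the graph.
```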
Has anyone solved this problem?
I'm trying to compress the ssd_mobilenet_v1_fpn_640x640_coco17_tpu-8 model (from the model zoo) with the TensorFlow optimization toolkit, or more specifically tensorflow_model_optimization, which supports quantizing TF/Keras models and lets you choose which parts to quantize.
However, I had this warning and error:
The input SavedModel was converted by object_detection/export_tflite_graph_tf2.py. Does anyone know how to load the OD model into a Keras model?