Open Shubhambindal2017 opened 3 years ago
ok
That's an excellent question. I am facing the same kind of problem: I try to load my Object Detection model from a saved_model and am unable to use it as in this example.
I found this in the documentation about saved_model:
When you save a tf.Module, any tf.Variable attributes, tf.function-decorated methods, and tf.Modules found via recursive traversal are saved.
I am guessing that somewhere in the SSD-MobileNetV2 architecture it implements this tf.Module. I hope this gives you some hints toward a good explanation.
As for me, I just stopped using saved_model for Object Detection; instead I load the last trained checkpoint of my model and use it directly.
In addition to this topic: when I exported to TFLite, it works better with normalization, and ONLY at 300x300, as set in the config file. So why is the model named "SSD-MobileNetV2 320x320"? That is the question...
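For what it's worth, the input size the exported model expects comes from the image_resizer block in your pipeline.config, not from the model's zoo name. A fragment like the following (values assumed from a 300x300 fine-tuning config, as described in this thread) would explain why only 300x300 works:

```
image_resizer {
  fixed_shape_resizer {
    height: 300
    width: 300
  }
}
```

My understanding is that the "320x320" in the zoo name refers to the default config shipped with the pretrained checkpoint; once the resizer is overridden for fine-tuning, the exported model follows the overridden config. Please verify against your own pipeline.config.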
I have fine-tuned an SSD-MobileNetV2 (training config with a fixed resize of 300x300) built using the TensorFlow Object Detection API and saved it in TF SavedModel format. Questions:
In simple words: the documentation is not clear about which pre-processing steps (resizing / normalization) are required for inference from the saved_model format. Here too, no pre-processing such as resizing or normalization is applied to the input image: https://github.com/tensorflow/models/blob/master/research/object_detection/colab_tutorials/inference_from_saved_model_tf2_colab.ipynb
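On the normalization question: to my understanding, the exported SavedModel's serving signature calls the model's own preprocess step internally, which is why the linked colab feeds raw uint8 pixels with no external pre-processing. For MobileNet-family feature extractors, that internal step scales pixels from [0, 255] to [-1, 1] (my reading of the Object Detection API source; please verify against your version). A minimal sketch of that scaling, mainly useful if you bypass the SavedModel and run inference from checkpoints yourself:

```python
def mobilenet_normalize(pixel: float) -> float:
    """Scale a pixel value from [0, 255] to [-1, 1].

    This mirrors the (2/255)*x - 1 preprocessing that MobileNet-style
    feature extractors apply internally (an assumption based on my
    reading of the TF Object Detection API source, not verified here).
    """
    return (2.0 / 255.0) * pixel - 1.0


print(mobilenet_normalize(0))    # -1.0
print(mobilenet_normalize(255))  # 1.0
```

If you feed the SavedModel signature directly, applying this scaling yourself would normalize the image twice, which may explain degraded results.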