Closed cattmi closed 4 years ago
@cattmi
Please fill in the issue template.
I am not able to open the link you shared. Please share a colab link or a complete code snippet with supporting files so we can reproduce the issue in our environment. It helps us localize the issue faster. Thanks!
I have the same problem.
Running on Colab, see notebook below.
To easily reproduce:
```python
detection_model.build((640, 640, 3))
tf.keras.models.save_model(detection_model, 'savemodel')
```
My error message:
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-187-08c2245c2768> in <module>()
----> 1 tf.keras.models.save_model(detection_model, 'savedmodel_batch32_1000step')

78 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/tf_utils.py in get_reachable_from_inputs(inputs, targets)
    138       outputs = x.consumers()
    139     else:
--> 140       raise TypeError('Expected Operation, Variable, or Tensor, got ' + str(x))
    141
    142     for y in outputs:

TypeError: Expected Operation, Variable, or Tensor, got block4
```
I have tried in colab with TF version 2.2 and was able to reproduce the issue. Please find the gist here. Thanks!
The detection_model here is a Python instance (SSDMetaArch) rather than a Keras model instance, so you can't use .save or keras.save_model to save it.
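To make that distinction concrete, here is a TensorFlow-free toy sketch; the class below is an invented stand-in, not the real object_detection API. An object implemented outside the Keras class hierarchy simply never gains the .save machinery that tf.keras.models.save_model hooks into.

```python
# Toy stand-in for a detection model written as a plain Python class
# (as SSDMetaArch is), rather than subclassing tf.keras.Model.
class PlainDetectionModel:
    def predict(self, images):
        return {"boxes": [], "scores": []}

model = PlainDetectionModel()
print(hasattr(model, "save"))  # → False: nothing for Keras saving to hook into
```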
Aha.
Many thanks for promptly resolving this.
Mike
On 15 Jul 2020, at 00:07, pkulzc notifications@github.com wrote:
The detection_model here is a python instance (SSDMetaArch) instead of Keras model instance, so you can't use .save or keras.save_model to save it.
In the previous TensorFlow 1.x Object Detection API there was an option to export the model for TFLite inference after removing the post-processing stage. Is there similar functionality in the new exporter? How can a model be exported up to a specific node? I see that the saved_model, when loaded using either tf.saved_model.load or keras.load, doesn't have the prune function. Is there another script to do this?
@cattmi
Please close this thread if your issue was resolved. Thanks!
@sambhusuryamohan, we should probably start a separate thread somewhere appropriate on saving/loading and TFLite export of the Python-coded models, as this is an important topic. But I will close this thread as resolved: i.e., we know model.save can only work with Keras-implemented models, not plain Python models.
The detection_model here is a python instance (SSDMetaArch) instead of Keras model instance, so you can't use .save or keras.save_model to save it.
Then how to save SSDMetaArch instead?
Is there seriously no way to export a detection model to TFLite in TF2?
The detection_model here is a python instance (SSDMetaArch) instead of Keras model instance, so you can't use .save or keras.save_model to save it.
Then how to save SSDMetaArch instead?
I would also like to know which steps to take to save the model when we have it as SSDMetaArch instance. Do I need to convert it into a keras instance before saving or is there a way to save it directly as SSDMetaArch instance ?
You can save your detection model by building it, then using tf.saved_model to save it directly. SSDMetaArch can be saved this way:

```python
detection_model.build((640, 640, 3))
tf.saved_model.save(detection_model, 'model_name', signatures=None, options=None)
```
@vermavinay982 how do I load the saved .pb model for this SSDMetaArch and run inference on it? If I use tf.saved_model.load() and then try to run inference, I get the error below:

```
AttributeError: 'UserObject' object has no attribute 'preprocess'
```
You can save your detection model by building it, then using tf.saved_model to save it directly. SSDMetaArch can be saved this way:

```python
detection_model.build((640, 640, 3))
tf.saved_model.save(detection_model, 'model_name', signatures=None, options=None)
```
It can be saved, but without signature functions we can't use the model to run inference. See models/research/object_detection/exporter_lib_v2.py line 278.
You can refer to the notebook above. It covers everything from training to saving models; even if you don't require TFLite, checkpoints will still be available in the output. I couldn't find any support for saving SSDMetaArch models; a ckpt_manager should be there, as it is in the TFLite notebook.
Since the model is not a Keras model but an SSDMetaArch instance, you can try saving the checkpoint and the config file, then build the model and load from the checkpoint to restore the weights. I found this solution; it may not be the best, but it works. If you have a better one, please share.
```python
# Save new pipeline config
new_pipeline_proto = config_util.create_pipeline_proto_from_configs(configs)
config_util.save_pipeline_config(new_pipeline_proto, '/content/new_config')

exported_ckpt = tf.compat.v2.train.Checkpoint(model=detection_model)
ckpt_manager = tf.train.CheckpointManager(
    exported_ckpt, directory="test_data/checkpoint/", max_to_keep=5)
print('Done fine-tuning!')
ckpt_manager.save()
print('Checkpoint saved!')
```
Loading the model from checkpoint
```python
# Reference for loading the model from checkpoint
print('Building model and restoring weights for fine-tuning...', flush=True)
num_classes = 1
pipeline_config = '/content/new_config/pipeline.config'
checkpoint_path = 'test_data/checkpoint/ckpt-1'

configs = config_util.get_configs_from_pipeline_file(pipeline_config)
model_config = configs['model']
x = model_builder.build(model_config=model_config, is_training=True)
```
Refer to this site; you can do it like this: https://www.tensorflow.org/lite/guide/signatures
```python
@tf.function(input_signature=[tf.TensorSpec(shape=[None, 640, 640, 3], dtype=tf.float32)])
def detect(input_tensor):
    preprocessed_image, shapes = detection_model.preprocess(input_tensor)
    prediction_dict = detection_model.predict(preprocessed_image, shapes)
    return detection_model.postprocess(prediction_dict, shapes)

tf.saved_model.save(
    detection_model, 'your_path_to_save_model',
    signatures={'detect': detect.get_concrete_function()})

new_model = tf.saved_model.load('your_path_to_save_model')
detections = new_model.signatures['detect'](your_detection_img_tensor)  # pass your image tensor here
```
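For anyone wiring this up: the 'detect' signature above is declared with shape [None, 640, 640, 3] and dtype float32, so a single raw image needs a cast and a leading batch dimension first. Here is a minimal NumPy-only sketch of that shaping step; the zero-filled frame is a placeholder, and in practice you would decode a real image and pass the array through tf.convert_to_tensor before calling the signature.

```python
import numpy as np

# Placeholder 640x640 RGB frame; a real frame would come from an image decoder.
image = np.zeros((640, 640, 3), dtype=np.uint8)

# The signature expects float32 with a leading batch axis: [1, 640, 640, 3].
batch = image.astype(np.float32)[np.newaxis, ...]
print(batch.shape, batch.dtype)  # → (1, 640, 640, 3) float32
```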
@wayne931121 It worked: the .pb file is generated and it detects perfectly. Thanks!
You can save your detection model by building it, then using tf.saved_model to save it directly. SSDMetaArch can be saved this way:

```python
detection_model.build((640, 640, 3))
tf.saved_model.save(detection_model, 'model_name', signatures=None, options=None)
```
I tried saving the model with tf.saved_model.save() and am facing this issue:

```
ValueError: Weights for model ssd_mobile_net_v1_keras_feature_extractor have not yet been created. Weights are created when the Model is first called on inputs or `build()` is called with an `input_shape`.
```
Hi all,
I'm trying to use the TF Object Detection API without scripts like exporter_main_v2.py, because I'm building a platform. I don't want an os.system("python exporter_main_v2.py --dir")-style codebase.
Try wrapping the model in a TensorFlow Module class and saving it with tf.saved_model.save. I've researched this problem for far too long, across all the GitHub issues and Stack Overflow; this is the only way.
```python
import tensorflow as tf
from object_detection.utils import config_util
from object_detection import exporter_lib_v2
import numpy as np

# Load the model configuration and checkpoint
pipeline_config = 'path/to/pipeline.config'
checkpoint_path = 'path/to/checkpoint'
output_directory = 'path/to/save/model'

# Load the pipeline configuration
configs = config_util.get_configs_from_pipeline_file(pipeline_config)

# Build the detection model
model = exporter_lib_v2.build_model(configs['model'], is_training=False)

# Restore the checkpoint
checkpoint = tf.train.Checkpoint(model=model)
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))

class DetectionModule(tf.Module):
    def __init__(self, model):
        super(DetectionModule, self).__init__()
        self.model = model

    @tf.function(input_signature=[
        tf.TensorSpec(shape=[1, None, None, 3], dtype=tf.float32, name='input_image'),
    ])
    def predict(self, input_image):
        # input_image = tf.cast(input_image, dtype=tf.uint8)  # convert input image from float to uint8 if needed
        preprocessed_image, shapes = self.model.preprocess(input_image)
        prediction_dict = self.model.predict(preprocessed_image, shapes)
        detections = self.model.postprocess(prediction_dict, shapes)
        return detections

model_module = DetectionModule(model)
tf.saved_model.save(model_module, export_dir=output_directory)
```
```python
import tensorflow as tf
import tensorflow_hub as hub

class ItemModel(tf.keras.Model):
    def __init__(self, base_model="https://tfhub.dev/google/nnlm-en-dim50/2",
                 embedding_dimension=32):
        super().__init__()
        self.base_model = hub.KerasLayer(base_model, input_shape=[], trainable=True)
        self.lin = tf.keras.layers.Dense(embedding_dimension)

    def call(self, inputs):
        features = self.base_model([inputs])
        # print(features)
        embs = self.lin(features)
        return embs

item_model = ItemModel()
```
I'm trying to get embeddings for text strings from a tfds dataset, but tfds returns tf.Tensor. I get the following error when passing sample/all data to the model using data.batch(1).map(item_model):
```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[51], line 1
----> 1 print(item_model(items.batch(1)))

File /opt/conda/envs/tf/lib/python3.8/site-packages/keras/utils/traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs)
     67   filtered_tb = _process_traceback_frames(e.__traceback__)
     68   # To get the full stack trace, call:
     69   # `tf.debugging.disable_traceback_filtering()`
---> 70   raise e.with_traceback(filtered_tb) from None
     71 finally:
     72   del filtered_tb

Cell In[49], line 22, in ItemModel.call(self, inputs)
     20 def call(self, inputs):
     21     # ip = tf.cast(inputs, tf.string)
---> 22     features = self.base_model([inputs])
     23     # print(features)
     24     embs = self.lin(features)

File /opt/conda/envs/tf/lib/python3.8/site-packages/tensorflow_hub/keras_layer.py:234, in KerasLayer.call(self, inputs, training)
    228 # ...but we may also have to pass a Python boolean for `training`, which
    229 # is the logical "and" of this layer's trainability and what the surrounding
    230 # model is doing (analogous to tf.keras.layers.BatchNormalization in TF2).
    231 # For the latter, we have to look in two places: the `training` argument,
    232 # or else Keras' global `learning_phase`, which might actually be a tensor.
    233 if not self._has_training_argument:
--> 234   result = f()
    235 else:
    236   if self.trainable:

ValueError: Exception encountered when calling layer "keras_layer_13" (type KerasLayer).

When input_signature is provided, all inputs to the Python function must be convertible to tensors:
inputs: ([
```
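One hedged guess at the root cause, based on the `inputs: ([` fragment at the end of the traceback: `call` wraps the already-batched tensor in a Python list (`self.base_model([inputs])`), which adds an extra nesting level that the hub layer's input signature can't convert to a tensor. The sketch below is a TensorFlow-free mock of that shape contract; `fake_hub_layer` is an invented stand-in, not the real `hub.KerasLayer` API. If this diagnosis is right, calling `self.base_model(inputs)` (without the list) should fix it, but that is an untested assumption.

```python
# Mock of a text-embedding layer's input contract: it expects a flat batch
# of strings. fake_hub_layer is an invented stand-in, not hub.KerasLayer.
def fake_hub_layer(batch):
    if any(isinstance(x, (list, tuple)) for x in batch):
        raise ValueError("expected a flat batch of strings")
    return [len(s) for s in batch]  # stand-in "embedding": one value per string

flat = fake_hub_layer(["hello", "world"])  # OK: a flat batch of strings
print(flat)  # → [5, 5]

try:
    fake_hub_layer([["hello", "world"]])   # the [inputs] mistake: extra nesting
except ValueError as err:
    print("raised:", err)
```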
TF_2_2_colab_object_detection_20200713b_TF2_2_cut_down.zip
Prerequisites
Please answer the following questions for yourself before submitting an issue.
1. The entire URL of the file you are using
https://github.com/tensorflow/models/blob/master/research/object_detection/colab_tutorials/eager_few_shot_od_training_tf2_colab.ipynb
Please see the attached zip file with a Jupyter notebook based on 'eager_few_shot_od_training_tf2_colab.ipynb', authored by 'Tombstone' at Google. This is currently inaccessible at the time of posting, but is listed under: https://github.com/tensorflow/models/tree/master/research/object_detection/colab_tutorials
Differences:
Migrated to Jupyter notebook in newly created Tensorflow 2.2 environment
Apologies: should be under 'research/models'.
2. Describe the bug
A clear and concise description of what the bug is: the model trains successfully, but both 1) model.save, in the form detection_model.save(model_dest), and 2) tf.saved_model.save(), in the form tf.saved_model.save(detection_model, model_dest, signatures=None, options=None), yield errors.
3. Steps to reproduce
Steps to reproduce the behavior.
yields output:

```
AttributeError: 'SSDMetaArch' object has no attribute 'save'
```
while the saved_model route:

```python
# saved_model trained model
model_directory = '/home/michael/jupyter_notebooks_TF_2_2/models/'
model_name = 'TF_2_2_colab_DOT_object_detection_20200713a'
model_dest = os.path.join(os.sep, model_directory, model_name)

tf.saved_model.save(to_export, '/tmp/adder')
tf.saved_model.save(detection_model, model_dest, signatures=None, options=None)
```

yields:
TypeError Traceback (most recent call last)