Right now we grab layer names and run them through a session.run C++ call, but in Python you can grab the signature_def by name and simply call the result as a function. The outputs come back as a dictionary with the key-value pairs defined in the signature.
For example, from a model exported via an estimator:
signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['image'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 96, 96, 3)
        name: image:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['output'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 10)
        name: sequential/dense/BiasAdd:0
  Method name is: tensorflow/serving/predict
Grab "serving_default" and you get a function back, call the function, passing in the tensor that will correspond to inputs['image'], and you'll get back a dictionary of tensors, one of the keys being 'output'. Pretty awesome, and no sessions.
And here's the Python in our prediction code:

# Module-level imports the method relies on.
import numpy as np
import tensorflow as tf


def predict(self, dataset, op_definition):
    """Runs a demo prediction task over the loaded dataset."""
    self.logger.info('Running prediction over sample dataset')
    # Look up the signature by name and call it directly; no session needed.
    model_fn = self.model.signatures[self.task.model_op]
    num_correct = 0
    for (feature, label) in dataset.as_numpy_iterator():
        # How to generalize this? We might need to reshape in some cases.
        # How do we relate input names with the tf dataset?
        if 'shape' in op_definition['inputs'][0]:
            input_shape = op_definition['inputs'][0]['shape']
            feature = tf.reshape(feature, input_shape)
        outputs = model_fn(feature)
        outputs_definition = op_definition['outputs']
        output_name = outputs_definition[0]['name']
        predicted_class = np.argmax(outputs[output_name][0])
        if predicted_class == label:
            num_correct += 1
        self.logger.info(f'Predicted model class is {predicted_class} and expected class is {label}')
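For context, op_definition is assumed to mirror the signature above, roughly like this (the exact dict is illustrative, not pulled from a real config; only the keys predict() reads are shown):

# Hypothetical op_definition for the 'serving_default' signature above.
op_definition = {
    'inputs': [
        {'name': 'image', 'shape': [-1, 96, 96, 3]},
    ],
    'outputs': [
        {'name': 'output'},
    ],
}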
Open question: variable initialization. We currently do this manually via the global variables initializer, but for a Keras model exported with save() and save_format='tf' you also get this:
signature_def['__saved_model_init_op']:
  The given SavedModel SignatureDef contains the following input(s):
  The given SavedModel SignatureDef contains the following output(s):
    outputs['__saved_model_init_op'] tensor_info:
        dtype: DT_INVALID
        shape: unknown_rank
        name: NoOp
  Method name is:
I've noted in the 1.15 build that the saved_model loading code will automatically execute that __saved_model_init_op if it exists, but I don't think it runs automatically in our case... Wait, we don't actually execute any global variables initializer in the C++ code. Let me double check this.
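If it turns out we do need to run it ourselves, here's a rough sketch in Python of where the op lives in a 1.15-style load. The export path is a placeholder, and per the note above loader.load may already run this for us:

import tensorflow.compat.v1 as tf

export_dir = '/tmp/keras_export'  # placeholder path

with tf.Session(graph=tf.Graph()) as sess:
    # Restores variables; in 1.15 this should also run the init op itself.
    meta_graph = tf.saved_model.loader.load(sess, ['serve'], export_dir)
    if '__saved_model_init_op' in meta_graph.signature_def:
        init_sig = meta_graph.signature_def['__saved_model_init_op']
        init_op_name = init_sig.outputs['__saved_model_init_op'].name  # 'NoOp' in the dump above
        # Safe to run explicitly here since the op is just a NoOp for this model.
        sess.run(init_op_name)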