google-research / text-to-text-transfer-transformer

Code for the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer"
https://arxiv.org/abs/1910.10683
Apache License 2.0

SavedModel export for ML Cloud serving #31

Closed anatoly-khomenko closed 4 years ago

anatoly-khomenko commented 4 years ago

I have fine-tuned the 3B model in the Colab notebook provided in this repo (notebooks/t5-trivia.ipynb). After fine-tuning as recommended in the notebook, I would like to export the model as a SavedModel so it can be served by ML Cloud.

To do this, I use the following code fragment placed as the last cell in the notebook (so the model object is already created):

vocabulary = t5.data.SentencePieceVocabulary(t5.data.DEFAULT_SPM_PATH)
estimator = model.estimator(vocabulary)

your_feature_spec = {
    # "inputs": tf.FixedLenFeature([], dtype=tf.string, default_value=""),
    "inputs": tf.VarLenFeature(dtype=tf.string),
}

def _serving_input_receiver_fn():
    serialized_tf_example = tf.placeholder(dtype=tf.string, shape=None, 
                                           name='inputs')
    # key (e.g. 'examples') should be the same as the inputKey used when you
    # build the request for prediction
    receiver_tensors = {'inputs': serialized_tf_example}
    features = tf.parse_example(serialized_tf_example, your_feature_spec)
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)

estimator.export_savedmodel(os.path.join(MODEL_DIR, "saved_model/"), _serving_input_receiver_fn)

I get the following error when executing this code:

[screenshot of the error message]

The complete stack trace is attached: stack-trace-SavedModel.txt

I assume the error means that this model is only suitable for TPU inference and is not expected to work on ML Cloud (where only CPU instances are available).

Is it feasible to serve this model on ML Cloud, and if so, what steps should I follow to accomplish that?

Thank you!

adarob commented 4 years ago

I have pushed some changes to add an mtf_model.export call. However, I believe I will need to update both the mesh and t5 packages to get everything working for you. I'll try to test this today and update the packages.
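
For anyone following along, here is a rough sketch of what the new call could look like from the end of the notebook. The argument names are my assumptions based on the current code and may change, so check the mtf_model source for the final signature:

# Minimal sketch, assuming MtfModel.export accepts an export directory
# and a checkpoint step; beam_size/temperature values are illustrative.
model.export(
    os.path.join(MODEL_DIR, "export"),
    checkpoint_step=-1,   # assumed: -1 means "use the latest checkpoint"
    beam_size=1,
    temperature=1.0)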

reedxiao commented 4 years ago

@adarob
Thanks for the great work!

I was using t5 version 0.2.0, and I tried to deploy an exported sample model to GCP ML Engine (single-core CPU), but with no luck. The error was:

Create Version failed. Model validation failed: SavedModel must contain exactly one metagraph with tag: serve For more information on how to export Tensorflow SavedModel, see https://www.tensorflow.org/api_docs/python/tf/saved_model.

I used saved_model_cli to inspect the exported file, and I guess the problem was due to the second 'serve' tag for TPU.
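
For reference, this is roughly how the tags can be inspected (standard saved_model_cli usage; <export_dir> is a placeholder for the timestamped export directory):

# List the tag-sets contained in the SavedModel, then the serving signature.
saved_model_cli show --dir <export_dir>
saved_model_cli show --dir <export_dir> --tag_set serve --signature_def serving_default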

Another potential problem is that ML Engine seems to require the outer dimension to be unknown to enable batching, while the t5 batch size is set to a fixed integer.

Last but not least, the SavedModel works well on a local machine, but it broke after I tried to quantize it by following the instructions at the link.

So, my questions are:

  1. Is there an option to manage the tags and dimensions in the exported SavedModel to support ML Engine?
  2. Could you provide suggestions on t5 model quantization for serving?

Your help will be highly appreciated. Regards.

adarob commented 4 years ago

Thanks for sharing these details. Unfortunately, as researchers we don't have time to test all of these applications, so it's very helpful for users like yourself to both report and help us fix bugs like this.

Adding @toponado who may be able to help with the tpu tag issue. I wonder if there is a way to remove the tag after export?
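
If it helps, here is a rough, untested sketch of stripping the extra metagraph after export by rewriting saved_model.pb to keep only the plain 'serve' metagraph. This assumes the variables/assets on disk are shared by both metagraphs, so dropping the TPU one still leaves a loadable model:

import os
import tensorflow as tf
from tensorflow.core.protobuf import saved_model_pb2

def strip_tpu_metagraph(saved_model_dir):
    # Keep only the metagraph whose tag-set is exactly {'serve'}.
    path = os.path.join(saved_model_dir, "saved_model.pb")
    sm = saved_model_pb2.SavedModel()
    with tf.gfile.GFile(path, "rb") as f:
        sm.ParseFromString(f.read())
    kept = [mg for mg in sm.meta_graphs
            if set(mg.meta_info_def.tags) == {"serve"}]
    assert len(kept) == 1, "expected exactly one CPU-only 'serve' metagraph"
    new_sm = saved_model_pb2.SavedModel(
        saved_model_schema_version=sm.saved_model_schema_version)
    new_sm.meta_graphs.extend(kept)
    with tf.gfile.GFile(path, "wb") as f:
        f.write(new_sm.SerializeToString())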

I actually don't believe the dimensionality is an issue. The input placeholder is shaped [None], and the batch is automatically padded as long as it is no bigger than what you set as batch_size during export. We can come back and address this if it turns out to be a problem.
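
As a quick local sanity check, something like the following should work (a sketch only, assuming TF 1.x with tf.contrib available and that the exported signature's input key is "inputs", matching the serving input receiver above):

from tensorflow.contrib import predictor

# export_dir is the timestamped directory produced by the export call.
predict_fn = predictor.from_saved_model(export_dir)
# A batch smaller than the export-time batch_size should be padded internally.
print(predict_fn({"inputs": ["trivia question: who invented the telephone?"]}))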

adarob commented 4 years ago

I took a quick look, and it seems we just need to disable export_to_tpu in the TPUEstimator constructor: https://github.com/tensorflow/estimator/blob/08dd1e6ca94248691dfe00aadafc65e8b875e44f/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py#L2665

However, this appears to be already happening here: https://github.com/tensorflow/mesh/blob/4c656e2f249bcacbda684c4ff0a860b86f372a2c/mesh_tensorflow/transformer/utils.py#L1082

I'm not sure why it is still being exported with TPU...

adarob commented 4 years ago

Ah, it's because we call export_estimator_savedmodel (https://github.com/tensorflow/estimator/blob/08dd1e6ca94248691dfe00aadafc65e8b875e44f/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py#L4219), which creates a new TPUEstimator with export_to_tpu set to True by default. Should be fairly easy to fix. I'll check it out tomorrow.
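
In the meantime, a possible workaround is to build the TPUEstimator directly with export_to_tpu=False and export from it, rather than going through export_estimator_savedmodel. A sketch only; model_fn, run_config, batch_size, and serving_input_receiver_fn stand in for whatever the mesh/t5 export path already constructs, and only export_to_tpu is the point here:

# Build the estimator ourselves so export_to_tpu stays False.
cpu_export_estimator = tf.estimator.tpu.TPUEstimator(
    model_fn=model_fn,
    config=run_config,
    train_batch_size=batch_size,
    predict_batch_size=batch_size,
    export_to_tpu=False)  # avoids the second ['serve', 'tpu'] metagraph
cpu_export_estimator.export_saved_model(
    os.path.join(MODEL_DIR, "export"), serving_input_receiver_fn)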