tensorflow / tensor2tensor

Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
Apache License 2.0

How to restore a trained Transformer model to make predictions in Python? #1532

Open NielsRogge opened 5 years ago

NielsRogge commented 5 years ago

I trained the Transformer model on my own data by defining my own Problem class (called "sequence", a text2text problem). I used model=transformer and hparams=transformer_base_single_gpu. After data generation, training and decoding, I successfully exported the model using t2t-exporter, as I can see a saved_model.pbtxt file and a variables/ directory in my export directory.

My question is: how can I now restore that trained model to make predictions on new sentences in Python? I'm working in Google Colab. I read that for text problems, the exported model expects the inputs to already be encoded as integers. How do I do this?
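For context, the integer encoding for t2t text problems normally comes from the problem's feature encoders. A minimal sketch, assuming the custom problem can be loaded (see the usr_dir fix below) and that the data directory contains the vocab file written by t2t-datagen:

from tensor2tensor import problems

DATA_DIR = "path/to/data_dir"  # hypothetical: where t2t-datagen wrote the vocab file

problem = problems.problem("sequence")
encoders = problem.feature_encoders(DATA_DIR)

# Encode a new sentence to integer ids; 1 is the EOS id for t2t text problems.
input_ids = encoders["inputs"].encode("a new sentence to translate") + [1]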

I tried to follow this notebook, but I am not able to retrieve the Problem I defined earlier. When I run

from tensor2tensor import problems

# Fetch the problem
problem = problems.problem("sequence")

it throws an error stating that "sequence" is not in the set of supported problems.

Thanks for any help!

Single430 commented 5 years ago

You can look at the tensorflow-serving-api and serve the exported model with:

tensorflow_model_server --port=8500 --model_base_path=path/to/model
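A hedged sketch of querying the served model over gRPC with the tensorflow-serving-api client, assuming a servable named "sequence" and that the exported t2t signature takes a batch of serialized tf.Example protos under the input key "input" (inspect the export with saved_model_cli to confirm the keys):

import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel("localhost:8500")
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

# Wrap the integer-encoded inputs in a serialized tf.Example.
example = tf.train.Example(features=tf.train.Features(feature={
    "inputs": tf.train.Feature(int64_list=tf.train.Int64List(value=[5, 17, 1]))
})).SerializeToString()

request = predict_pb2.PredictRequest()
request.model_spec.name = "sequence"
request.inputs["input"].CopyFrom(tf.make_tensor_proto([example], shape=[1]))
response = stub.Predict(request, 10.0)  # 10-second timeout
print(response.outputs["outputs"])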

NEUdjp commented 5 years ago

> You can look at the tensorflow-serving-api and serve the exported model with tensorflow_model_server --port=8500 --model_base_path=path/to/model

Decoding one sentence currently takes about 50 seconds. How can I decrease that time? Can the tensorflow-serving-api solve my problem?

jurukode commented 5 years ago

Hi @NielsRogge,

If you're using your own problem, you need to import it first using the following code:

from tensor2tensor.utils import usr_dir
from tensor2tensor import problems

usr_dir.import_usr_dir("<YOUR_PROBLEM_MODULE_PATH>")

# Then you can fetch the problem
problem = problems.problem("sequence")

Hope it helps!

amirakazadeh commented 5 years ago

Hi, how did you define your own model with your own dataset? Could you please share the code where you define your model?

Thanks!

jurukode commented 5 years ago

@amirakazadeh you can check here https://github.com/tensorflow/tensor2tensor#adding-your-own-components
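The core of it is registering a Problem subclass with the t2t registry. A minimal sketch (the class name Sequence registers under the snake_case name "sequence"; the vocab size and sample data here are illustrative assumptions):

from tensor2tensor.data_generators import text_problems
from tensor2tensor.utils import registry

@registry.register_problem
class Sequence(text_problems.Text2TextProblem):
    """An illustrative text2text problem, registered as "sequence"."""

    @property
    def approx_vocab_size(self):
        return 2**15  # illustrative subword vocab size

    def generate_samples(self, data_dir, tmp_dir, dataset_split):
        del data_dir, tmp_dir, dataset_split  # unused in this sketch
        # Yield {"inputs": ..., "targets": ...} pairs from your own data.
        yield {"inputs": "example source text", "targets": "example target text"}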

echan00 commented 5 years ago

I have loaded my model via:

model = tf.saved_model.load_v2(MODEL_DIR)

How would I make predictions with it in Python? (without TensorFlow Serving, as this will be a custom prediction routine on Google AI Platform)

EDIT: It looks like this is the way to do it:

from tensorflow.contrib import predictor
predict_fn = predictor.from_saved_model(MODEL_DIR)

from https://stackoverflow.com/questions/45900653/tensorflow-how-to-predict-from-a-savedmodel
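Putting the pieces in this thread together, a hedged end-to-end sketch; the paths, the problem name "sequence", and the signature keys "input"/"outputs" are assumptions (print predict_fn.feed_tensors and predict_fn.fetch_tensors to see the real keys of your export):

import tensorflow as tf
from tensorflow.contrib import predictor
from tensor2tensor import problems
from tensor2tensor.utils import usr_dir

usr_dir.import_usr_dir("path/to/problem_module")  # hypothetical module path
problem = problems.problem("sequence")
encoders = problem.feature_encoders("path/to/data_dir")  # where the vocab file lives

predict_fn = predictor.from_saved_model("path/to/export_dir")

def predict(sentence):
    ids = encoders["inputs"].encode(sentence) + [1]  # 1 = EOS id
    example = tf.train.Example(features=tf.train.Features(feature={
        "inputs": tf.train.Feature(int64_list=tf.train.Int64List(value=ids))
    })).SerializeToString()
    output = predict_fn({"input": [example]})  # input key is an assumption
    return encoders["targets"].decode(output["outputs"][0].flatten())

print(predict("a new sentence to translate"))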