Open ShahNewazKhan opened 6 years ago
@ShahNewazKhan I encountered the same problem. Have you solved it?
@rgwt123 I used t2t_query with the appropriate modifications, as described in my solution here: https://github.com/tensorflow/tensor2tensor/issues/868#issuecomment-399042915
@ShahNewazKhan I followed your example, and I get output that varies depending on the targets passed in `_make_example`. Does this mean that to get the correct translation I need to run a loop and do beam search myself (using the transformer)? Can you show me how you get the correct translation?
@rgwt123 I'm sorry, I am not familiar with the language-translation properties of the transformer model. The solution I pointed to is for scoring sentiment on documents, as set out in the `sentiment_imdb` problem.

For `sentiment_imdb` scoring, you do not need to pass in the targets, as the query.py script reuses the input method from the training interface.
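As a rough sketch of such a query (the exact flag names may differ between tensor2tensor versions, and the server address, servable name, and data directory below are placeholders, not values from this thread):

```shell
# Query a running model server for the sentiment_imdb problem.
# --server, --servable_name, --problem and --data_dir mirror the flags
# accepted by tensor2tensor's serving/query.py; all values are placeholders.
t2t-query-server \
  --server=localhost:9000 \
  --servable_name=transformer \
  --problem=sentiment_imdb \
  --data_dir=$DATA_DIR
```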
Description
I exported a trained transformer model for the `sentiment_imdb` problem with the following:

```
t2t-exporter \
  --model=transformer \
  --hparams_set=transformer_small \
  --problem=sentiment_imdb \
  --data_dir= \
  --output_dir=
```
I then started the tensorflow_model_server as such
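For reference, a typical `tensorflow_model_server` invocation for a t2t-exported model looks roughly like this (the ports, model name, and export path are placeholders, not the values actually used here):

```shell
# Serve the exported model over gRPC (--port) and REST (--rest_api_port).
# --model_base_path must point at the directory containing the numbered
# export subdirectories produced by t2t-exporter; all values are placeholders.
tensorflow_model_server \
  --port=9000 \
  --rest_api_port=8501 \
  --model_name=transformer \
  --model_base_path=/path/to/export
```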
I ensured the `default_serving` signature is present in the exported model's `saved_model.pbtxt`. However, I get an input parsing error when pinging the REST API:
...
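Input parsing errors like the one above are often a request-format issue: the exported model expects a serialized `tf.train.Example`, and the TF Serving REST API requires raw bytes to be sent base64-encoded under a `"b64"` key. A minimal sketch of building such a request body follows; the input key `"input"` and the serialized bytes are assumptions (the real bytes would come from `example.SerializeToString()`, and the key must match the exported signature):

```python
import base64
import json

# Stand-in for the bytes produced by example.SerializeToString().
# (Building the tf.train.Example itself requires TensorFlow, so
# placeholder bytes are used here.)
serialized_example = b"\x0a\x0bplaceholder"

# TF Serving's REST API expects binary tensor values base64-encoded
# under a "b64" key. The input key "input" is an assumption and must
# match the key in the exported serving signature.
payload = {
    "instances": [
        {"input": {"b64": base64.b64encode(serialized_example).decode("ascii")}}
    ]
}
body = json.dumps(payload)
print(body)
```

The resulting body would then be POSTed to the REST endpoint, e.g. `http://host:8501/v1/models/<model_name>:predict` (model name and port depending on how the server was started).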
Environment information
For bugs: reproduction and error logs