tensorflow / tensor2tensor

Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
Apache License 2.0

logprob scores when decoding from file, with return_beams=True #459

Open thompsonb opened 6 years ago

thompsonb commented 6 years ago

I am trying to generate an n-best list with logprob scores. I can get both the top N beams and their scores by running t2t-decoder in interactive mode and specifying something like --decode_hparams="beam_size=4,alpha=.6,batch_size=64,return_beams=True"

When I decode from a file using the same decode_hparams, I am able to get multiple tab-separated beams, but I cannot figure out how to output or access the scores. Does anyone know how to output scores when decoding from a file? Thanks!
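For reference, my file-decoding invocation looks roughly like this (the paths and environment variables are placeholders for my setup):

```
t2t-decoder \
  --problem=$PROBLEM --model=$MODEL --hparams_set=$HPARAMS \
  --data_dir=$DATA_DIR --output_dir=$TRAIN_DIR \
  --decode_from_file=input.txt \
  --decode_to_file=output.txt \
  --decode_hparams="beam_size=4,alpha=.6,batch_size=64,return_beams=True"
```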

qcl6355 commented 6 years ago

I think you'll need to hack the code in utils/decoding.py. The output scores are stored in result["scores"]; you need to customize the output format yourself.
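Something along these lines, for instance (a rough sketch only; the argument names are placeholders for whatever is in scope in the actual decode loop):

```python
def write_beams_with_scores(results, targets_vocab, output_file):
    """Illustrative sketch: emit one "<translation>\t<logprob>" line per beam.

    `results` stands in for the dicts yielded by estimator.predict, and
    `targets_vocab`/`output_file` for the vocabulary and file handle that
    utils/decoding.py already has in scope.
    """
    for result in results:
        beams = result["outputs"]    # token ids for each returned beam
        scores = result["scores"]    # log-prob score for each beam
        for beam, score in zip(beams, scores):
            text = targets_vocab.decode(beam)
            output_file.write("%s\t%f\n" % (text, score))
```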

rsepassi commented 6 years ago

Sure. What I've cooked up is a new decode_hp, write_beam_scores, which will write the scores out to the file. It works with both decode_from_file and decode_from_dataset when you're writing out to a file. Let me know how it goes once it's out in the next release.
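Usage would look something like this (the paths and other hparam values are just an example):

```
t2t-decoder \
  --decode_from_file=input.txt \
  --decode_to_file=output.txt \
  --decode_hparams="beam_size=4,alpha=.6,return_beams=True,write_beam_scores=True"
```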

ndvbd commented 6 years ago

@rsepassi There is something strange here, performance-wise.

When I export the model using tensor2tensor.serving.export with --decode_hparams="return_beams=True", prediction is about 5x slower than with a model that was exported without this hparam.

This is strange to me because even when return_beams=False, we still get the 4 scores and the top beam. The only difference with return_beams=True is that we get the top 4 beams instead of only the top one. But should that really take more time? During beam search decoding you are holding the top 4 beams anyhow, so they only need to be output; there is no further computation to be done, is there?