I guess, similar to `output_attentions` and `output_hidden_states`, we could output the scores / probabilities for generation, but I'm really not sure if it is required that often. What do you think @sshleifer @yjernite ?
I would suggest trying it on a branch and seeing if it produces better generations. I have been inspecting the scores this week (just by saving hypotheses to disk) and have not gotten much utility. If it helps produce better generations, however, we should obviously add this!
Dear @patrickvonplaten and @sshleifer, thanks for the quick reply. I'm interested in the perplexity of my generated text as a function of different generation methods. This can be done using the probabilities of the output tokens. Another interesting case that jumps to mind is autocomplete, where you want to present the user with generated text only if it passes some confidence threshold.
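For context, here is a minimal sketch of the perplexity computation this would enable. The `token_logprobs` tensor is a hypothetical stand-in for whatever per-token log-probabilities a score-returning `generate` might expose:

```python
import torch

# Hypothetical per-token log-probabilities of one generated sequence.
token_logprobs = torch.tensor([-0.31, -1.24, -0.70, -2.05])

# Perplexity is the exponentiated average negative log-likelihood.
perplexity = torch.exp(-token_logprobs.mean())
print(perplexity.item())
```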
Those are actually very useful applications! I think we will soon have a bigger refactoring of the `generate` method, which will hopefully include this.
As @sshleifer said, for now, it would be great if you can show how you would integrate it on a branch including some interesting results.
Fantastic. Will do.
Thanks for raising the issue @guyeyal. It would definitely be helpful to have a running example.
More generally @patrickvonplaten, I think this functionality will be helpful for the line of research concerned with analyzing the role of perplexity as a training objective, as well as work on re-ranking generations or approaches like noisy channel modeling, so I definitely think it should be in the next big refactor.
https://arxiv.org/abs/1904.09751 https://arxiv.org/abs/1908.05731
I'll add a vote here that I'm interested in this too. I wrote some code locally very similar to guyeyal's.
Thanks for the great work!
I would also be interested in this functionality. I am using an autoregressive transformer model as part of a reinforcement learning problem. To alleviate the sample inefficiency of RL, it is very attractive to generate data using beam search, in order to add `num_beams > 1` sequences to a buffer per time step. I would then like to bias the sampling of data from this buffer according to the probability of each generated sequence, defined as in the diagram in this example:
https://huggingface.co/blog/how-to-generate#beam-search
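A minimal sketch of that sampling scheme, assuming each buffer entry already carries its sequence log-probability from beam search (the `buffer` structure here is hypothetical):

```python
import torch

# Hypothetical replay buffer: (sequence, sequence_log_probability) pairs,
# with log-probabilities taken from the beam search scores.
buffer = [("seq_a", -2.3), ("seq_b", -0.9), ("seq_c", -1.5)]

log_probs = torch.tensor([lp for _, lp in buffer])
# Higher-probability sequences are sampled more often.
weights = torch.softmax(log_probs, dim=0)
idx = torch.multinomial(weights, num_samples=1).item()
sampled_sequence = buffer[idx][0]
```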
@patrickvonplaten is this something that is likely to be covered in the PR here: https://github.com/huggingface/transformers/pull/6949 or is it better to open a new issue? Thanks!
There seems to be a lot of interest in this functionality! If someone feels like opening a PR that would be great!
I saw a PR here, but it was never merged: #6289
Any idea why this wasn't merged?
It wasn't quite working properly for me when I tried it. I did a fix locally based on 3.4.0, but the big refactor of generation_utils in 3.5.x broke it entirely again. With all the changes to that file, I think it would be better to start afresh at this point.
Is this feature available? Or still in the works?
I am using the TensorFlow implementation of T5. When I use `model.generate(input_ids=input_ids, num_beams=5, num_return_sequences=5, return_dict_in_generate=True)`, it returns a tensor of shape `(1, 5, vocab_size)`. Essentially, it is giving me the probability of a single word for each beam. In the description of `beam_search` in the link above, it says that `scores` refers to "all processed lm head logits + the current beam_scores for each output token". In this link, https://huggingface.co/transformers/internal/generation_utils.html, it says that `scores` is "the prediction scores of the language modelling head, for each generation step". Ideally, we want a score for each token at every step of the generation for each beam. So, wouldn't the shape of the output be `(batch_size, number_of_beams, sequence_length, vocab_size)`? That way, we can follow the path that each beam took to reach the max-probability sequence. In other words, for each beam we would have the probability of each token in our vocabulary at each generation step (up to the max length).
I want to use these token probabilities to calculate the sequence probabilities. In the blog for `generate()` (https://huggingface.co/blog/how-to-generate#beam-search), it shows that beam search looks for the highest product of probabilities among all sequences: "At time step 2, beam search finds that the word sequence ("The","dog","has"), has with 0.36 a higher probability than ("The","nice","woman"), which has 0.2". How can we get access to this sequence-level probability, as shown in the blog?
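For reference, the PyTorch implementation exposes exactly this as `sequences_scores` when beam search is run with `return_dict_in_generate=True` and `output_scores=True`. A minimal sketch, using `t5-small` as a placeholder model:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer(
    "translate English to German: How old are you?", return_tensors="pt"
).input_ids
output = model.generate(
    input_ids,
    num_beams=5,
    num_return_sequences=5,
    return_dict_in_generate=True,
    output_scores=True,
)
# output.sequences_scores holds one final beam score (the length-normalized
# sum of token log-probabilities) per returned sequence.
print(output.sequences_scores)
```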
@patrickvonplaten, as a follow-up to the post above, does the TensorFlow implementation of `model.generate()` produce the `sequence_scores` that are available in the PyTorch implementation? Or does `scores` return a tensor of shape `(batch_size, number_of_beams, sequence_length, vocab_size)`, from which we can calculate the product of the token probabilities at each step of the beam search for each sequence? Thanks for your help!
How can I get the generation score of the gold sequence?
I don't think we have the TF implementation of this function yet. Also cc @gante here
🚀 Feature request
Thanks for doing such awesome work. I'm interested in the hypothesis score when running `generate`. This could be done per hypothesis, or preferably per token in the hypothesis.
Motivation
The motivation is to gain a confidence measure for my generated text. I suggest something along these lines in `_generate_no_beam_search`:

```python
best_scores = []
# in _generate_no_beam_search, keep a running score per sequence:
output_score = 0
# at each step: output_score += log-probability of the chosen token
```
In the next step, we could save the score per token, to allow the user to decide where they want to truncate the generated text as a function of confidence.
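A minimal sketch of that truncation idea, assuming per-token probabilities were returned (the `tokens` and `token_probs` values here are made up):

```python
tokens = ["The", "weather", "is", "probably", "fine"]
token_probs = [0.91, 0.85, 0.88, 0.40, 0.02]  # hypothetical confidences

threshold = 0.30
kept = []
for token, prob in zip(tokens, token_probs):
    if prob < threshold:
        break  # truncate at the first low-confidence token
    kept.append(token)

print(" ".join(kept))  # -> "The weather is probably"
```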