SimonBenhamou opened this issue 4 months ago (status: Open)
Any update on this?
The method generate_tokens has a return_log_prob option, which is false by default. You can enable it and read the log-probability of each generated token from result.log_prob.
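A minimal sketch of consuming those per-step results. Since running generate_tokens needs a converted model on disk, a tiny stand-in class replaces the real generator here; the commented-out lines show the intended CTranslate2 calls, with "model_dir" and the prompt as placeholders:

```python
from dataclasses import dataclass


@dataclass
class StepResult:
    """Stand-in for a CTranslate2 generation step result.

    With a real model, generate_tokens(..., return_log_prob=True) yields
    step results exposing the sampled token and its log-probability.
    """
    token: str
    log_prob: float


def collect_log_probs(steps):
    """Gather (token, log_prob) pairs from a stream of step results."""
    return [(s.token, s.log_prob) for s in steps]


# With a real model this would be:
#   generator = ctranslate2.Generator("model_dir")  # placeholder path
#   steps = generator.generate_tokens(prompt_tokens, return_log_prob=True)
steps = [StepResult("▁Hello", -0.12), StepResult("▁world", -0.58)]
print(collect_log_probs(steps))
```

Note that this only surfaces the log-probability of the token that was actually sampled at each step, which is exactly the limitation discussed below.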
@minhthuc2502 I want the distribution over the whole vocabulary (or at least top K). With your solution I only get the log prob of the token generated by the model...
This distribution must be computed anyway in order to sample the token, which is why I would guess it should be "easy" to expose it...
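To illustrate what is being asked for: the sampler already turns the model's logits into a log-probability distribution over the vocabulary, and exposing the top K entries of that distribution would be enough. A self-contained sketch of that computation (plain Python, numerically stable log-softmax; the function name and example logits are illustrative, not part of the CTranslate2 API):

```python
import math


def top_k_log_probs(logits, k):
    """Log-softmax over the vocabulary, then keep the k most likely entries.

    Returns a list of (vocab_index, log_prob) pairs sorted from most to
    least likely. This is the distribution a sampler must compute
    internally before picking one token.
    """
    # Stable log-sum-exp: subtract the max before exponentiating.
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    log_probs = [x - log_z for x in logits]
    top = sorted(range(len(log_probs)),
                 key=lambda i: log_probs[i], reverse=True)[:k]
    return [(i, log_probs[i]) for i in top]


# Toy 4-token vocabulary; indices 0 and 3 carry the most probability mass.
logits = [2.0, 0.5, -1.0, 1.0]
print(top_k_log_probs(logits, 2))
```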
Hello,
Unless I'm mistaken, neither translate_batch nor generate_tokens on the translator has an option to output the logits/probabilities of the generated tokens. However, this computation must be done internally before sampling the tokens, right?
This would be very useful for having more control over the generation, for example to influence the choice of specific tokens. Is there a particular reason why this is not exposed? Could it be easily added, or am I missing something? It could, for example, take the form of a return_logits argument to the mentioned functions.
Thanks, Simon