Closed xiami2019 closed 11 months ago
@xiami2019 Same question.
I found that they actually use OpenAI's API to get the probabilities. You can see the details in the file called openai_api.py: the probabilities are obtained on line 819, where
generations, probs, retrievals, traces = qagent.prompt(prompts, api_key=key)
Hope it helps :)
Sorry for the late reply and thanks @jacksonchen1998 for the pointers!
To be more specific, completion models (such as text-davinci-003, which is the model used in the paper) return the probabilities of generated tokens if you specify the logprobs parameter in the request. However, chat models (such as gpt-3.5-turbo) cannot return probabilities.
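To illustrate, here is a minimal sketch of how the token probabilities can be recovered from a completion response. The request line is commented out (it needs an API key and the legacy openai-python Completion interface); the response dict below is a made-up example shaped like the documented Completions schema, and the threshold value is an arbitrary placeholder, not the one used in the paper:

```python
import math

# With the legacy Completions API you would request log-probabilities like:
# response = openai.Completion.create(
#     model="text-davinci-003", prompt="...", logprobs=1
# )
# Below is a hypothetical response with that shape (values invented):
response = {
    "choices": [{
        "text": " Paris",
        "logprobs": {
            "tokens": [" Par", "is"],
            "token_logprobs": [-0.12, -0.05],  # natural-log probabilities
        },
    }]
}

# Convert log-probabilities to probabilities.
token_logprobs = response["choices"][0]["logprobs"]["token_logprobs"]
token_probs = [math.exp(lp) for lp in token_logprobs]

# Confidence-based trigger: if the least confident token falls below a
# threshold (placeholder value here), fall back to retrieval.
THRESHOLD = 0.8
needs_retrieval = min(token_probs) < THRESHOLD
print(token_probs, needs_retrieval)
```

Since chat models did not expose per-token probabilities, this only works with the completion endpoint, as noted above.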
Hi, nice work! I have a question about how to get a token's probability for confidence-based active retrieval. Can it be obtained from OpenAI's API, or do we need a separate white-box model to compute this probability?