Closed: zanderjiang closed this issue 1 month ago
Likely related to https://github.com/outlines-dev/outlines/issues/609

To generate choices with models.openai, Outlines tokenizes the choices and sets the logit_bias (token filtering) API argument such that there is ~0% chance of any other token being selected.
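For context, a minimal sketch of how such a logit_bias map could be built (an illustration, not Outlines' actual implementation; the helper name and the tiktoken encoding choice are assumptions):

```python
import tiktoken

def build_logit_bias(choices, model="gpt-3.5-turbo"):
    """Bias every token id that appears in any choice to the OpenAI maximum (+100)."""
    enc = tiktoken.encoding_for_model(model)
    bias = {}
    for choice in choices:
        for token_id in enc.encode(choice):
            # +100 is the largest value the logit_bias parameter accepts,
            # which makes these tokens overwhelmingly likely at every step.
            bias[token_id] = 100
    return bias

print(build_logit_bias(["coat", "jacket"]))
```

Note that the bias is a flat set of token ids with no notion of position or ordering, which is what enables the second failure mode below.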
There are two issues here:

1. Upstream top_p Handling: When top_p is set, OpenAI appears to completely ignore the logit_bias parameter. In other words, the bias is not applied whenever top_p is set and the user is using OpenAI (see the call sketch after this list).

2. Outlines Method: Outlines allows any token from any choice at any point. When I ran your reproduction script, because "jacket" tokenizes as ["j", "acket"], "j" is a legal token at every position. That is why ChatGPT responded with "jj" in one case (see the tokenization check after this list).
Describe the issue as clearly as possible:
I'm trying to use outlines with an OpenAI-compatible API. I also need to pass some arguments to the LLM call. Right now, the program runs without any error, but it never produces output and remains stuck. I suspect there is an issue with the API call being made, but since the API error is not surfaced, it is very difficult to identify the problem.
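A rough sketch of the kind of setup being described (not the reporter's actual script): the server URL, model name, and config fields are assumptions, and the exact models.openai signature for OpenAI-compatible endpoints depends on the installed Outlines version.

```python
from openai import AsyncOpenAI
import outlines
from outlines.models.openai import OpenAIConfig

# Point the OpenAI client at an OpenAI-compatible server (URL is a placeholder).
client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# Extra arguments for the LLM call go through the config object (field names assumed).
config = OpenAIConfig(model="my-local-model", temperature=0.0, max_tokens=16)

model = outlines.models.openai(client, config)
generator = outlines.generate.choice(model, ["coat", "jacket"])

# If the underlying API call fails, the error may not surface here,
# which would match the hang described above.
print(generator("Is it a coat or a jacket?"))
```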
Steps/code to reproduce the bug:
Expected result:
Error message:
Outlines/Python version information:
Context for the issue:
No response