Closed: antonio-veezoo closed this issue 6 months ago
This is a fine-tuned model with very high top probabilities (even 1.0, which seems strange). I don't know whether OpenAI sends all `top_logprobs` in every case; the comment at line 96 of `ChatCompletionResponse.scala` suggests otherwise, so my issue might be related to that.
Increasing the temperature helps, so I think my high probabilities are the issue here. It is still strange that the format of the answer changes in that case, and that the library does not handle it.
Hi @antonio-veezoo,
Thanks for reporting! This looks like a corner case: the response contains `"bytes": null` for `"token": "<|end|>"`. Will fix.
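For context, a minimal sketch of how that corner case can be handled, assuming the library models each top-logprob entry as a case class; the names and types below are assumptions for illustration, not the library's actual domain model:

```scala
// Hypothetical shape of a top-logprob entry (names/types are assumptions,
// not openai-scala-client's actual definitions). Special tokens such as
// <|end|> can come back with "bytes": null, so modeling the field as an
// Option lets null deserialize to None instead of failing the parse.
case class TopLogprobEntry(
  token: String,             // e.g. "<|end|>"
  logprob: Double,           // 0.0 corresponds to probability 1.0
  bytes: Option[Seq[Short]]  // None when the API sends "bytes": null
)
```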
Thanks for the quick fix! Do you have a timeline for when this will land on Maven, or an alternative way to use a version with the fix right away?
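One option in the meantime (a sketch, assuming a standard sbt setup; the version string below is a placeholder, not a published artifact) is to clone the repository containing the fix, run `sbt publishLocal`, and point the build at the locally published version:

```scala
// build.sbt (sketch): depend on the locally published build of the library.
// Replace the version with whatever `sbt publishLocal` reports.
libraryDependencies += "io.cequence" %% "openai-scala-client" % "1.0.0.RC.2-SNAPSHOT"
```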
When setting `top_logprobs` higher than `Some(1)` I get a parsing issue from the response. I'm using the init below for `CreateChatCompletionSettings`; it works just fine with `top_logprobs = Some(1)` or `top_logprobs = None`. Any idea what might be the issue? Running version `1.0.0.RC.1`.
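A minimal sketch of such an init (assuming openai-scala-client's import path and snake_case settings fields; the model id and values are placeholders, not the original snippet):

```scala
import io.cequence.openaiscala.domain.settings.CreateChatCompletionSettings

// Illustrative settings of the kind described above; the model id is a
// placeholder for a fine-tuned model and the values are made up.
val settings = CreateChatCompletionSettings(
  model = "ft:gpt-3.5-turbo:my-org::placeholder", // hypothetical fine-tuned model id
  temperature = Some(0.0),    // low temperature -> near-1.0 top probabilities
  logprobs = Some(true),      // ask the API for log probabilities
  top_logprobs = Some(3)      // values > 1 triggered the parsing error
)
```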
Thanks for any help!