Closed — aalok-sathe closed this issue 8 months ago
Reopening: as announced by OpenAI on X, logprobs appear to be available again, so it would be nice to rebuild and test the (existing) support for this in our package. Thanks @jennhu!
@jennhu do you have an MWE for this with the new API? I haven't used OpenAI models in a while, so if you have something that already works I wouldn't have to start over :)
If you have the bandwidth to open a PR, even better — but if not, I'll take anything to start from!
After poking around some more with the chat models, I don't think we can get surprisals after all :(

The `logprobs` parameter lets us get the logprobs corresponding to the model's generated output, but without the `echo` parameter (as was previously supported by the legacy Completions models), we can't get the logprobs assigned to an arbitrary input string. It's possible I might be missing something, though!
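For reference, here's a sketch of the legacy Completions pattern being described, and how its per-token logprobs mapped onto surprisal. The API call itself is shown only as comments, since it targets the deprecated v0 SDK and requires an API key; the model name and the logprob values below are illustrative, not from a real response:

```python
import math

# Legacy Completions pattern (deprecated; shown for reference only).
# With echo=True and max_tokens=0, the endpoint returned the prompt
# itself along with a logprob for each *prompt* token:
#
#   import openai
#   resp = openai.Completion.create(
#       model="davinci-002",        # illustrative model name
#       prompt="The cat sat on the mat",
#       max_tokens=0,               # generate nothing...
#       echo=True,                  # ...just echo the prompt back
#       logprobs=0,                 # with per-token logprobs
#   )
#   token_logprobs = resp["choices"][0]["logprobs"]["token_logprobs"]
#
# The first token has no conditioning context, so its logprob was None.
# Hypothetical values standing in for an actual response:
token_logprobs = [None, -2.1, -0.8, -1.3]

# Surprisal of the string (in bits), skipping the leading None:
surprisal = -sum(lp for lp in token_logprobs if lp is not None) / math.log(2)
print(round(surprisal, 3))
```

This is exactly the capability the chat endpoint dropped: without `echo`, there is no way to get those prompt-token logprobs back.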
I've already added a deprecation warning to the README as of https://github.com/aalok-sathe/surprisal/commit/a926f9b. The API should continue working for older models that support this endpoint (though I haven't tested it recently), but otherwise the broader issue of adding support is going to be a wontfix. Alternatives to OpenAI models include MPT, Mistral, Llama, and Falcon (not an exhaustive list or an endorsement).
OpenAI no longer provides logprobs for the prompt, making it impossible to use these models as a probability distribution over an arbitrary string. They do, however, continue to provide logprobs over their own completions.
E.g.:
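A minimal sketch with the current (v1) Python SDK illustrating the distinction. The network call is commented out since it needs an API key; the model name is an assumption, and hypothetical logprob values stand in for a real response:

```python
import math

# Current Chat Completions pattern: logprobs are available, but only
# over the model's *generated* tokens, never over the prompt.
#
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(
#       model="gpt-4o-mini",          # illustrative model name
#       messages=[{"role": "user", "content": "Say hello"}],
#       logprobs=True,
#       max_tokens=5,
#   )
#   # One entry per generated token; nothing for the prompt tokens.
#   completion_logprobs = [t.logprob for t in resp.choices[0].logprobs.content]
#
# Hypothetical values standing in for an actual response:
completion_logprobs = [-0.02, -1.7, -0.4]

# Surprisal of the completion (in bits) -- fine for scoring what the
# model generated, but no help for scoring an arbitrary input string:
surprisal = -sum(completion_logprobs) / math.log(2)
print(round(surprisal, 3))
```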
Originally reported on Twitter.