jordantgh opened 1 year ago
I would definitely be interested in seeing the effect of this. Would this add any complications with LiteLLM, or with non-OpenAI model / completions-model usage?
LiteLLM supports function calling for models that support it. But not every LLM that LiteLLM covers (or that you might want to try) exposes a function calling interface, so this would have to be restricted to more limited experiments: mainly the GPT models. I think some versions of Llama support it too, but I haven't tried them, and I doubt the syntax is guaranteed to be 1:1 with OpenAI's.
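Something like the capability check below could gate which backends get the structured-output path. This is a minimal sketch assuming LiteLLM's `supports_function_calling` helper; treating models that raise (i.e. aren't in LiteLLM's model registry) as unsupported is my assumption, not documented behavior.

```python
# Minimal sketch: gate experiments on whether a LiteLLM-routed model
# exposes a function calling interface. Assumes litellm's
# supports_function_calling() capability check; treating models that
# raise as unsupported is an assumption, not documented behavior.
import litellm

candidates = ["gpt-4", "gpt-3.5-turbo", "ollama/llama2"]

for model in candidates:
    try:
        ok = litellm.supports_function_calling(model=model)
    except Exception:
        ok = False  # unknown to litellm's registry; assume unsupported
    print(f"{model}: {'function calling' if ok else 'plain-text prompting only'}")
```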
Hey @jordantgh @adamkarvonen - I'm the co-maintainer of LiteLLM. Any ideas for how we can support this better?
For GPT models, the function calling API can be used to enforce structured output. The intended use is to reliably supply arguments for a downstream function call, but it also works as a general way to impose a rigid structure on outputs. That lets us (theoretically) guarantee that moves come back in proper notation, with no extraneous text and no separate parsing step. I am also interested in testing whether it might cut through some of the RLHF in the non-instruct models that seems to be degrading their performance.
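For concreteness, here's a rough sketch of what I mean (untested; assumes the current openai Python SDK, and `play_move` is a hypothetical schema I made up). Pinning the model to a single tool means the reply is always a JSON arguments blob rather than prose:

```python
# Rough sketch: use function calling as a structured-output constraint.
# "play_move" is a hypothetical schema; nothing is ever executed, the
# tool definition just forces the model to answer with a JSON object
# containing a single "move" field.
import json
from openai import OpenAI

client = OpenAI()

MOVE_TOOL = {
    "type": "function",
    "function": {
        "name": "play_move",  # hypothetical; exists only to shape the output
        "description": "Submit the next chess move.",
        "parameters": {
            "type": "object",
            "properties": {
                "move": {
                    "type": "string",
                    "description": "Next move in standard algebraic notation, e.g. 'Nf3'.",
                }
            },
            "required": ["move"],
        },
    },
}

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "1. e4 e5 2. Nf3 Nc6 3."}],
    tools=[MOVE_TOOL],
    # tool_choice pins the model to this tool, so the reply is always a
    # JSON arguments payload: no extraneous text, no regex parsing.
    tool_choice={"type": "function", "function": {"name": "play_move"}},
)

args = json.loads(response.choices[0].message.tool_calls[0].function.arguments)
print(args["move"])  # e.g. "Bb5"
```

One caveat: the schema constrains the shape of the response, not its content, so an illegal or malformed move string is still possible. Hence "theoretically".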