Context
If an API key is provided, Grist can use an LLM as an assistant for writing formulas. The LLM can be self-hosted or an external service. The default external service is OpenAI, but the default model currently specified is being deprecated:
https://platform.openai.com/docs/deprecations/2023-11-06-chat-model-updates
Proposed solution
This commit updates the default model used when this feature is enabled, since the existing one is being deprecated. Benchmarking suggests the results are generally better, though not dramatically so.
The feature of falling back to a longer-context model is no longer as important, but is retained since it could be useful for self-hosters.
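The retained fallback behavior can be sketched roughly as follows. This is an illustrative sketch, not Grist's actual implementation: the function names, error type, and model labels here are assumptions. The idea is simply that when a completion request fails because the prompt exceeds the default model's context window, the request is retried once with a longer-context model.

```typescript
// Sketch of falling back to a longer-context model when the default
// model rejects an over-long prompt. All names here are illustrative,
// not Grist's actual code.

class ContextLengthError extends Error {}

type Complete = (model: string, prompt: string) => Promise<string>;

async function completeWithFallback(
  complete: Complete,
  prompt: string,
  defaultModel: string,
  longerContextModel: string,
): Promise<string> {
  try {
    return await complete(defaultModel, prompt);
  } catch (e) {
    if (e instanceof ContextLengthError) {
      // Prompt too long for the default model: retry once with the
      // longer-context variant instead of failing outright.
      return await complete(longerContextModel, prompt);
    }
    throw e;
  }
}

// Fake completion function for demonstration: pretend the default
// model only accepts prompts up to 20 characters.
const fakeComplete: Complete = async (model, prompt) => {
  if (model === "default" && prompt.length > 20) {
    throw new ContextLengthError("context length exceeded");
  }
  return `${model} ok`;
};

completeWithFallback(fakeComplete, "x".repeat(50), "default", "longer")
  .then((result) => console.log(result)); // prints "longer ok"
```

Keeping the fallback as a single retry (rather than a loop) keeps cost bounded: a prompt too long even for the longer-context model still fails with a clear error.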
Related issues
https://github.com/gristlabs/grist-core/pull/345
Has this been tested?
Added/updated tests in the test suite. There are also existing benchmarking scripts in https://github.com/gristlabs/grist-core/tree/main/test/formula-dataset, which I ran on this model. The Grist Labs SaaS is also already using this model.