michael-smt opened this issue 1 month ago
There is https://github.com/WeblateOrg/weblate/pull/11467, which is not yet ready for merging.
[👋 LiteLLM CTO] Hi @michael-smt, thanks for using LiteLLM. Can we hop on a quick call sometime this week? I'd love to see if we can help with this issue on the LiteLLM side and learn how we can improve litellm for you.
My cal for your convenience: https://calendly.com/d/4mp-gd3-k5k/berriai-1-1-onboarding-litellm-hosted-version?month=2023-10
My linkedin if you prefer DMs: https://www.linkedin.com/in/reffajnaahsi/
Describe the problem
I would like to integrate self-hosted LLMs that are proxied by litellm, which provides an API in OpenAI format.
The API endpoint URL is not configurable in the OpenAI machinery. Currently this can be worked around by setting the environment variable `OPENAI_BASE_URL` to the custom API endpoint, which applies globally. But the model names offered by litellm via the API do not match the hardcoded common OpenAI model names available in the Weblate machinery settings dropdown. This results in the error "Could not fetch translation: Unsupported model: <model>".
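For reference, a minimal sketch of that workaround, assuming a litellm proxy listening at http://localhost:4000 (the URL, key, and model name are hypothetical placeholders):

```python
import os

# Point the openai client at the litellm proxy instead of api.openai.com.
# The variable is read when the client is constructed and affects the
# whole process, hence "applies globally".
os.environ["OPENAI_BASE_URL"] = "http://localhost:4000"

from openai import OpenAI

client = OpenAI(api_key="sk-anything")  # key requirements depend on the proxy's auth config

response = client.chat.completions.create(
    model="my-self-hosted-model",  # a litellm model_name, not a known OpenAI model
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```

The request itself works against the proxy, but Weblate rejects "my-self-hosted-model" first, because it is not in the hardcoded dropdown.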
Describe the solution you would like
It would be great if the OpenAI machinery could: 1) provide the option to override the default endpoint, and 2) populate the model dropdown from the API. This would also remove the need to hardcode the model choices in `OpenAIMachineryForm`.
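The OpenAI-compatible API already exposes the available models, so the dropdown could in principle be filled dynamically; a rough sketch, assuming the same hypothetical proxy URL as above:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000", api_key="sk-anything")

# GET /v1/models returns whatever models the litellm proxy is configured
# to serve; their ids could feed a Django-style choices list instead of
# the hardcoded tuple in OpenAIMachineryForm.
model_choices = [(model.id, model.id) for model in client.models.list()]
print(model_choices)
```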
Describe alternatives you have considered
Add another machinery for litellm: there are probably a lot of other OpenAI-compatible services, so it should be as generic as possible to keep the number of machineries in check.
Use the litellm Python package instead of the openai package to broaden LLM support: probably more prompt customization options would be needed, plus it would make Weblate dependent on a package from a VC-funded startup in the seed phase.
Screenshots
No response
Additional context
The relevant litellm proxy config looks like this (the model names and endpoints below are illustrative placeholders):
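```yaml
model_list:
  - model_name: my-self-hosted-model      # name exposed through the OpenAI-format API
    litellm_params:
      model: openai/my-self-hosted-model  # route litellm uses to reach the backend
      api_base: http://my-llm-backend:8000/v1
      api_key: none
```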