At the moment, only 'well-known' model providers can be configured automatically via env vars; a custom provider has to be set up by an admin through the UI. This change makes it possible to configure a custom OpenAI-compatible model provider (e.g. a HuggingFace model served via vLLM) entirely via the following env vars (a usage sketch follows the list):
GEN_AI_DISPLAY_NAME: "My custom model name"
GEN_AI_MODEL_PROVIDER: custom
GEN_AI_LLM_PROVIDER_TYPE: openai
GEN_AI_MODEL_VERSION: model-org/model-name
GEN_AI_API_ENDPOINT: http://my-self-hosted-model.com
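As a minimal sketch, these could be exported in the shell (or placed in an env file) before starting the server; the display name, model identifier, and endpoint URL below are placeholders, and the exact deployment/start command is not prescribed by this change:

# Custom OpenAI-compatible provider, e.g. a HuggingFace model served via vLLM
export GEN_AI_DISPLAY_NAME="My custom model name"
export GEN_AI_MODEL_PROVIDER=custom
export GEN_AI_LLM_PROVIDER_TYPE=openai
export GEN_AI_MODEL_VERSION=model-org/model-name
export GEN_AI_API_ENDPOINT=http://my-self-hosted-model.com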