Closed: imkebe closed this issue 1 month ago
@imkebe Hello there! I'm here to assist you with any bugs, questions, or even becoming a contributor. Let's work together to solve the issue you're facing.
To resolve the issue where the custom model "gpt-3.5" is not found when connecting to an OpenAI-compatible API using a LiteLLM Proxy in a self-hosted Dify deployment, ensure that the model is correctly defined and registered in your configuration. The model should be referred to as "gpt-3.5-turbo-0125" rather than "gpt-3.5".
Here is the relevant configuration snippet:
```yaml
model: gpt-3.5-turbo-0125
label:
  zh_Hans: gpt-3.5-turbo-0125
  en_US: gpt-3.5-turbo-0125
model_type: llm
features:
  - multi-tool-call
  - agent-thought
  - stream-tool-call
model_properties:
  mode: chat
  context_size: 16385
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
    max: 4096
  - name: response_format
    label:
      zh_Hans: 回复格式
      en_US: response_format
    type: string
    help:
      zh_Hans: 指定模型必须输出的格式
      en_US: specifying the format that the model must output
    required: false
    options:
      - text
      - json_object
pricing:
  input: '0.0005'
  output: '0.0015'
  unit: '0.001'
  currency: USD
```
Ensure that your API requests and configurations are using "gpt-3.5-turbo-0125" as the model identifier. If you encounter any issues with API keys or model quotas, refer to the Dify platform's settings and ensure that your model provider configurations are correctly set up [1][2][3][4].
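For example, a direct request against the OpenAI-compatible endpoint should succeed with that identifier. A minimal sketch using the openai Python client; the base URL and API key below are placeholders, not values from this issue:

```python
# Minimal sketch: call the OpenAI-compatible endpoint directly with the model
# identifier Dify expects. Base URL and API key are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://your.litellm.hostname:4000/v1",  # your proxy endpoint
    api_key="sk-custom-one",                          # your proxy key
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo-0125",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```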
I know I'm replying to a bot, but the model name in LiteLLM can be custom, with no gpt-3.5 variant exposed at all. I've tried multiple model names that LiteLLM exposes and that other services use, but only Dify responds with an error that gpt-3.5 was not found.
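For reference, LiteLLM lets the proxy expose an arbitrary alias for any underlying model via its `model_list` config; a sketch of such a mapping, with illustrative alias and upstream model names:

```yaml
# LiteLLM proxy config sketch: the alias exposed to clients is arbitrary and
# need not contain "gpt-3.5" anywhere. Names below are illustrative.
model_list:
  - model_name: custom-model-name      # the name clients (e.g. Dify) must use
    litellm_params:
      model: openai/gpt-4o-mini        # the actual upstream model
      api_key: os.environ/OPENAI_API_KEY
```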
To resolve the issue where Dify returns an error stating that the base model "gpt-3.5" is not found when trying to add a custom model via a LiteLLM Proxy, ensure that the model name in Dify matches the external access name configured in LiteLLM.
Here are the steps to follow:

1. Ensure LiteLLM is properly configured and running: it should be reachable at http://127.0.0.1:8080, or at the appropriate host IP if using Docker.
2. Verify the model configuration in LiteLLM: if you expose a custom name such as `custom-model-name`, ensure it is correctly set up in LiteLLM.
3. Configure Dify to use the correct model name and server URL: go to Settings > Model Providers > LocalAI in Dify, set the model name to `custom-model-name` (this should match the external access name configured in LiteLLM), and set the server URL to a host address reachable from the container rather than localhost, for example http://192.168.1.100:8080.

Here is a summary of the configuration:
Model 1: `custom-model-name`
- Model Type: Text Generation
- Model Name: `custom-model-name`
- Server URL: http://<your-LiteLLM-endpoint-domain>:8080 (e.g., http://192.168.1.100:8080)
Make sure to click "Save" to apply the configuration.
By ensuring that the model name in Dify matches the external access name configured in LiteLLM and using the correct server URL, you should be able to resolve the error [1][2][3].
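One way to confirm the names match is to list what the proxy actually exposes. A small check, again with placeholder endpoint and key:

```python
# List the model IDs the LiteLLM proxy exposes; the name configured in Dify
# must appear verbatim in this list. Endpoint and key are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://192.168.1.100:8080/v1", api_key="sk-custom-one")
print([m.id for m in client.models.list()])
```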
LocalAI cannot be used as a Dify backend here because there is no field for the access key. All the points mentioned are correct, though. The issue seems to be an unnecessary default dependency on the gpt-3.5 model when validating the response of the OpenAI endpoint.
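To illustrate the suspected failure mode, here is an illustrative sketch only, not Dify's actual source: if credential validation is pinned to a hard-coded model instead of the one the user configured, any proxy that exposes only custom names will always fail.

```python
# Illustrative sketch, not Dify's source code: validation pinned to a
# hard-coded model fails on proxies that expose only custom model names.
HARDCODED_MODEL = "gpt-3.5"

def validate_credentials_buggy(available_models: list[str]) -> None:
    # Suspected behavior: checks the hard-coded default regardless of what
    # the user configured, so custom-only proxies always fail.
    if HARDCODED_MODEL not in available_models:
        raise ValueError(f"Base model {HARDCODED_MODEL} not found.")

def validate_credentials_expected(available_models: list[str], configured_model: str) -> None:
    # Expected behavior: validate against the model the user configured.
    if configured_model not in available_models:
        raise ValueError(f"Base model {configured_model} not found.")

validate_credentials_expected(["custom-model-name"], "custom-model-name")  # passes
try:
    validate_credentials_buggy(["custom-model-name"])
except ValueError as e:
    print(e)  # Base model gpt-3.5 not found.
```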
Hello, we do not have a 2.19.4 version. Can you edit that again?
It's 0.6.15
The newly launched GitHub Models runs into the same situation: GitHub Models does not provide the GPT-3.5 model, so it is unusable. Since OpenAI has announced that GPT-3.5 will be phased out, it is recommended to change the default verification model to GPT-4o-mini.
We're still facing this issue now.
Fixed today #9387
Self Checks
Dify version
0.6.15
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
It's a regression of #4460, or a variant of it with a custom model.
Just add an OpenAI configuration or model:
Model Name: custom-model-name
API Key: sk-custom-one
Org: Ecma Org
API Base: http://your.litellm.hostname:4000
The hostname is accessible from the container. Other services are using the same LiteLLM Proxy without issue.
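A sanity check run from inside the Dify container, using the placeholder values from the repro above: if this succeeds, the proxy and networking are fine and the failure is in Dify's validation step, not in LiteLLM.

```python
# Run from inside the Dify container with the repro's placeholder values.
from openai import OpenAI

client = OpenAI(base_url="http://your.litellm.hostname:4000/v1", api_key="sk-custom-one")
reply = client.chat.completions.create(
    model="custom-model-name",
    messages=[{"role": "user", "content": "ping"}],
)
print(reply.choices[0].message.content)
```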
✔️ Expected Behavior
Since LiteLLM provides a compatible proxy with a custom model list, it should be possible to add a custom model.
❌ Actual Behavior
Base model custom-model-name not found.