Closed: onlyfish79 closed this issue 1 month ago
Hey @onlyfish79! 👋 I'm here to assist you with bugs, questions, and becoming a contributor. Let's squash those bugs together!
The error you're encountering in version 0.6.15 is likely due to changes in how the `model_name` parameter is handled or validated. To resolve this issue in your self-hosted Docker environment, follow these steps:
1. **Check the `model_name` configuration:** Ensure that `model_name` is correctly set in the Dify settings under **Settings > Model Providers > Ollama**. For example, if you are using the `llava` model, the configuration should look like this (a quick availability check follows the list):
- Model Name: `llava`
- Base URL: `http://<your-ollama-endpoint-domain>:11434`
- Model Type: `Chat`
- Model Context Length: `4096`
- Maximum Token Limit: `4096`
- Support for Vision: `Yes`
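To confirm the configured model is actually available on your Ollama server before wiring it into Dify, you can query Ollama's `/api/tags` endpoint. A minimal sketch, assuming the default port and a locally reachable endpoint:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # replace with your Ollama endpoint

# /api/tags lists all models pulled on this Ollama server
with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags", timeout=5) as resp:
    tags = json.load(resp)

print([m["name"] for m in tags.get("models", [])])  # should include e.g. "llava:latest"
```

If the model you configured in Dify is missing from this list, pull it first with `ollama pull llava`.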
2. **Expose the Ollama service to the network:** If you are running Dify and Ollama in Docker, ensure that the Ollama service is accessible from the Dify container. You may need to replace `localhost` with `host.docker.internal` or the appropriate network IP address. For example:
Base URL: `http://host.docker.internal:11434`
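To verify that the Dify container can actually reach Ollama at that address, you can run a quick check from inside the container. A minimal sketch; the container name is illustrative and depends on your compose setup:

```python
# Run inside the Dify API container, e.g.:
#   docker exec -it <dify-api-container> python
import urllib.request

url = "http://host.docker.internal:11434"
try:
    # Ollama's root endpoint answers with "Ollama is running"
    with urllib.request.urlopen(url, timeout=5) as resp:
        print(resp.status, resp.read()[:32])
except OSError as exc:
    print("unreachable:", exc)
```

On Linux, `host.docker.internal` may require adding `extra_hosts: ["host.docker.internal:host-gateway"]` to the service in your compose file.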
3. **Set environment variables:** Depending on your operating system, set the `OLLAMA_HOST` environment variable to expose the Ollama service on the network (a quick port check follows the platform steps):
On macOS:
```bash
launchctl setenv OLLAMA_HOST "0.0.0.0"
```
On Linux, edit the systemd service:
```ini
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
```
Then reload systemd and restart Ollama:
```bash
systemctl daemon-reload
systemctl restart ollama
```
On Windows, edit the system environment variables and set `OLLAMA_HOST` to `0.0.0.0`.
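Once `OLLAMA_HOST` is set and the service has been restarted, a quick socket test confirms Ollama is listening on all interfaces rather than only on localhost. A minimal sketch; the IP address is a placeholder for your host's LAN IP:

```python
import socket

HOST = "192.168.1.100"  # replace with your Ollama host's LAN IP

with socket.socket() as s:
    s.settimeout(5)
    rc = s.connect_ex((HOST, 11434))
    print("port open" if rc == 0 else f"port closed/filtered (errno {rc})")
```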
By ensuring the correct configuration and network accessibility, you should be able to resolve the `model_name` error in version 0.6.15 of Dify [1].
To continue talking to Dosu, mention @dosu.
I'm sure the model configuration is correct.
In the workflow's tool invocation, every model in the list ['claude-3.5-sonnet', 'gpt-4o', 'gemini-1.5-pro'] returns the correct result (on both 0.6.15 and 0.6.14).
Between 0.6.14 and 0.6.15, all configurations are the same. No change.
Your reply doesn't solve my problem.
To resolve the issue with the `model_name` parameter in version 0.6.15 of Dify, ensure that `model_name` is set to one of the valid formats listed in the `_position.yaml` file. The correct format should be one of the following:
- claude-3-5-sonnet-20240620
- claude-3-haiku-20240307
- claude-3-opus-20240229
- claude-3-sonnet-20240229
- claude-2.1
- claude-instant-1.2
- claude-2
- claude-instant-1
For example, you can set the `model_name` parameter to `"claude-3-sonnet-20240229"` instead of the incorrect format `['1721820680007', 'item']` [1].
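For context, the quoted error comes from a select-style form field whose value must be one of its declared options. A hedged sketch of that kind of validation; this is an illustration, not Dify's actual code:

```python
def validate_select(field_name: str, value, options: list[str]):
    # A select field only accepts values drawn from its options list;
    # anything else (including an unresolved selector) is rejected.
    if value not in options:
        raise ValueError(
            f"{field_name} in input form must be one of the following: {options}"
        )
    return value

options = ["claude-3.5-sonnet", "gpt-4o", "gemini-1.5-pro"]
print(validate_select("model_name", "claude-3.5-sonnet", options))  # passes

try:
    validate_select("model_name", ["1721820680007", "item"], options)
except ValueError as exc:
    print(exc)  # model_name in input form must be one of the following: [...]
```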
Additionally, ensure that your database schema is updated according to the migration script `4e99a8df00ff_add_load_balancing.py`, which includes changes to the `provider_name` column and the creation of new tables and indexes [2]. This might affect how the `model_name` parameter is handled in version 0.6.15.
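If you want to confirm that this migration has actually been applied, you can inspect the `alembic_version` table. A minimal sketch, assuming the default Postgres credentials from Dify's Docker Compose setup; adjust the connection parameters to match your deployment:

```python
import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect(
    host="localhost", port=5432,
    dbname="dify", user="postgres", password="difyai123456",  # defaults; change if customized
)
with conn, conn.cursor() as cur:
    # Alembic records the current schema revision here
    cur.execute("SELECT version_num FROM alembic_version")
    print(cur.fetchone()[0])  # should be 4e99a8df00ff or a later revision
conn.close()
```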
To continue talking to Dosu, mention @dosu.
Hello, please stay tuned on this PR: https://github.com/langgenius/dify/pull/6635 :) Already working on it.
Self Checks
Dify version
0.6.15
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
Workflow
Version 0.6.14 has no problem; 0.6.15 reports an error.
Iterator input: models list ['claude-3.5-sonnet', 'gpt-4o', 'gemini-1.5-pro']
The iterator invokes a tool; the tool requires an input model from the list ['claude-3.5-sonnet', 'gpt-4o', 'gemini-1.5-pro']
0.6.14 vs 0.6.15 execution results:
In 0.6.15, the tool is invoked with `model_name` in the format ['1721820680007', 'item'] and reports the error:
Failed to invoke tool: model_name in input form must be one of the following: ['claude-3.5-sonnet', 'gpt-4o', 'gemini-1.5-pro']
In 0.6.14, the tool is invoked with `model_name` as "claude-3.5-sonnet" (correct).
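A hedged illustration of what these values suggest (this is not Dify's actual implementation): the iterator item appears to be referenced by a variable selector of the form [node_id, 'item'], and 0.6.15 seems to pass the raw selector to the tool instead of first resolving it to the item's value, so the select validation fails:

```python
# Hypothetical variable pool mapping selectors to their resolved values
variable_pool = {("1721820680007", "item"): "claude-3.5-sonnet"}
selector = ["1721820680007", "item"]

resolved = variable_pool[tuple(selector)]  # 0.6.14-style: "claude-3.5-sonnet"
unresolved = selector                      # 0.6.15-style: ['1721820680007', 'item']

options = ["claude-3.5-sonnet", "gpt-4o", "gemini-1.5-pro"]
print(resolved in options)    # True  -> tool invocation succeeds
print(unresolved in options)  # False -> "model_name in input form must be one of ..."
```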
✔️ Expected Behavior
No response
❌ Actual Behavior
No response