Closed: shuther closed this issue 5 months ago
It's possible that after setting up Ollama with chat, completion, and embedding models in the self-hosted version, you encounter the message "The system model has not yet been fully configured, and some functions may be unavailable," and cannot select those models for the system reasoning model or for the embedding model. This usually happens when the integration with LocalAI hasn't been completed correctly or when there are network configuration issues, particularly when deploying with Docker.
To resolve this, ensure the Dify container can reach the LocalAI endpoint. If you're deploying via Docker, avoid using localhost or 127.0.0.1 for the LocalAI endpoint from the Dify container. Instead, use the host machine's IP address or a domain that resolves to the host where LocalAI is running, such as http://192.168.1.100:8080.
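One quick way to verify reachability is to run a small check from inside the Dify container. The sketch below is only illustrative: the endpoint IP/port mirror the example address above and must be replaced with your own, and it assumes the server exposes an OpenAI-compatible /v1/models route (LocalAI does; for a plain Ollama server the equivalent listing endpoint is /api/tags on port 11434).

```python
# Minimal reachability check, run from inside the Dify container
# (e.g. docker exec -it <dify-api-container> python3 check_endpoint.py).
# The URL is a placeholder taken from the example above; replace it with your host IP/port.
import json
import urllib.request

ENDPOINT = "http://192.168.1.100:8080/v1/models"  # OpenAI-compatible model listing

try:
    with urllib.request.urlopen(ENDPOINT, timeout=5) as resp:
        models = json.load(resp)
        print("Endpoint reachable, models:", [m["id"] for m in models.get("data", [])])
except Exception as exc:
    print("Endpoint NOT reachable from this container:", exc)
```

If this fails with a connection error while the same URL works from the host, the problem is the Docker network configuration rather than Dify itself.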
Configure Dify to connect to LocalAI by going to Settings > Model Providers > LocalAI and entering the correct Server URL (http://<your-LocalAI-endpoint-domain>:8080). Then test the connection by using the models through Dify. If the connection is established correctly, Dify should be able to communicate with LocalAI and perform model inference without issues.
Adjusting the network settings to ensure connectivity between the Dify container and the LocalAI endpoint is crucial, especially in Docker environments. In practice this means using the host's IP address instead of localhost to avoid connectivity issues within Docker.
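As a side note, on Docker Desktop the name host.docker.internal resolves to the host machine from inside a container, while on Linux it usually has to be mapped explicitly (for example with an extra_hosts entry pointing at host-gateway in the compose file). The snippet below is a small sketch for checking which names actually resolve from inside the Dify container; the bare IP is a placeholder.

```python
# Check which hostnames resolve from inside the Dify container.
# host.docker.internal only works if Docker provides it or you map it;
# the plain host IP (placeholder below) is the reliable fallback.
import socket

for name in ("localhost", "host.docker.internal", "192.168.1.100"):
    try:
        print(f"{name} -> {socket.gethostbyname(name)}")
    except OSError as exc:
        print(f"{name} -> cannot resolve ({exc})")
```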
When I switched to other tabs (Integrations, Data Source, ...) and came back, the system embedding model had been populated automatically and correctly. It's still an issue for the reasoning model, and there's a small UI bug to fix (so we don't have to switch between tabs).
Could you please provide some detailed screenshots to help us better understand the issue?
After a fresh install, I configured Ollama, and it was not possible to select any system model settings. When I moved to other tabs, it worked for the embeddings. After a restart, it automatically picked up Mistral. Hope it helps.
Self Checks
Provide a description of requested docs changes
Using the self-hosted version, I set up Ollama with a few models (chat, completion, and embeddings); however, I still see the message "The system model has not yet been fully configured, and some functions may be unavailable," and I can't select those models for the system reasoning model or for the embedding model. Is this expected?