langgenius / dify

Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
https://dify.ai

The system model has not yet been fully configured, and some functions may be unavailable. #3784

Closed shuther closed 5 months ago

shuther commented 7 months ago

Using the self-hosted version, I set up Ollama with a few models (chat, completion, and embeddings); however, I still see the message "The system model has not yet been fully configured, and some functions may be unavailable," and I can't select those models for the system reasoning model or the embedding model. Is this expected?

dosubot[bot] commented 7 months ago

After setting up Ollama with chat, completion, and embedding models in the self-hosted version, it is possible to still see the message "The system model has not yet been fully configured, and some functions may be unavailable" and to be unable to select those models as the system reasoning model or embedding model. This usually happens when the integration with LocalAI hasn't been completed correctly or there are network configuration issues, particularly when deploying with Docker.

To resolve this, ensure the Dify container can correctly access the LocalAI endpoint. If you're deploying via Docker, avoid using localhost or 127.0.0.1 for the LocalAI endpoint from the Dify container. Instead, use the host machine's IP address or a domain that resolves to the host where LocalAI is running, such as http://192.168.1.100:8080.
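
As a quick way to verify this, here is a minimal sketch (not an official Dify tool) that checks whether the endpoint is reachable from inside the Dify API container. The IP `192.168.1.100`, the port `11434` (Ollama's default; LocalAI typically uses `8080`), and the container name are placeholders for your own setup:

```python
# Minimal reachability check, run from inside the Dify API container, e.g.:
#   docker exec -it <dify-api-container> python3 check_endpoint.py
# 192.168.1.100 and port 11434 are placeholders; substitute your host's address.
import urllib.request

ENDPOINT = "http://192.168.1.100:11434"

try:
    with urllib.request.urlopen(ENDPOINT, timeout=5) as resp:
        print("Reachable, HTTP status:", resp.status)
except Exception as exc:
    print("Not reachable from this container:", exc)
```

If this fails while the same URL works from the host machine, the problem is the Docker network configuration rather than the model provider settings.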

Configure Dify to connect to LocalAI by going to Settings > Model Providers > LocalAI and entering the correct Server URL (http://&lt;your-LocalAI-endpoint-domain&gt;:8080). Then test the connection by using the models through Dify. If the connection is established correctly, Dify should be able to communicate with LocalAI and perform model inference without issues.
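
Before entering the URL in Dify, it can help to confirm the backend is actually serving models. A rough sketch, assuming a LocalAI endpoint at `192.168.1.100:8080` (LocalAI exposes an OpenAI-compatible `/v1/models` listing; Ollama lists its local models at `/api/tags` on port 11434 instead):

```python
# Hypothetical sanity check that the backend is serving models before
# selecting them in Dify. Adjust BASE_URL to your own endpoint.
import json
import urllib.request

BASE_URL = "http://192.168.1.100:8080"  # placeholder LocalAI endpoint

with urllib.request.urlopen(f"{BASE_URL}/v1/models", timeout=5) as resp:
    models = json.load(resp)

# Every model listed here should be selectable under Settings > Model Providers.
print(json.dumps(models, indent=2))
```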

Adjusting the network settings to ensure connectivity between the Dify container and the LocalAI endpoint is crucial, especially in Docker environments. This involves using the host's IP address instead of localhost to avoid connectivity issues within Docker.

#### Details

**Sources**

- [why model provider emtry?](https://github.com/langgenius/dify/issues/982)
- [en/tutorials/model-configuration/ollama.md](https://github.com/langgenius/dify-docs/blob/main/en/tutorials/model-configuration/ollama.md)
- [en/tutorials/model-configuration/localai.md](https://github.com/langgenius/dify-docs/blob/main/en/tutorials/model-configuration/localai.md)
- [zh_CN/guides/model-configuration/localai.md](https://github.com/langgenius/dify-docs/blob/main/zh_CN/guides/model-configuration/localai.md)
- [api/tests/integration_tests/model_runtime/localai/test_llm.py](https://github.com/langgenius/dify/blob/main/api/tests/integration_tests/model_runtime/localai/test_llm.py)
- [en/getting-started/install-self-hosted/install-faq.md](https://github.com/langgenius/dify-docs/blob/main/en/getting-started/install-self-hosted/install-faq.md)
- [en/getting-started/install-self-hosted/environments.md](https://github.com/langgenius/dify-docs/blob/main/en/getting-started/install-self-hosted/environments.md)
**About Dosu**

This response is meant to be useful and save you time. It is **not meant to be a precise solution**, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

To continue the conversation, mention @dosu.

shuther commented 7 months ago

When I switched to other tabs (Integrations, Data Sources, ...) and came back, the system embedding model had been populated automatically and correctly. It is still an issue for the reasoning model, and there is a small UI bug to fix so we don't have to switch between tabs.

takatost commented 7 months ago

Could you please provide some detailed screenshots to help us better understand the issue?

shuther commented 7 months ago

After a fresh install, I configured Ollama, but it was not possible to select any of the system model settings. When I moved to other tabs, it worked for the embedding model. After a restart, it picked up Mistral automatically. Hope this helps.

[Screenshot 2024-04-26 at 16:17:22]