To resolve the "Credentials validation failed with status code 404" error when adding a self-deployed LLM model in a self-hosted instance of the Dify application, ensure the following:
1. **Frontend and Backend Versions**: Verify that both the frontend and backend versions are up to date and consistent with each other.
2. **Model Deployment**: Ensure that the model has been successfully deployed and that its credentials are correctly configured.
3. **Credentials Configuration**: Make sure the credentials include the correct `endpoint_url` or `server_url` and that they are properly formatted. Validate the credentials by pinging the model endpoint to confirm it is reachable and compatible with OpenAI's API standard (a quick curl check is sketched under the next item).
4. **Ollama Service Accessibility**: If using Ollama, ensure the service is reachable from the Dify application. If Dify is deployed via Docker, use a local network IP address or the Docker host machine's IP address instead of `localhost`. For example, use `http://192.168.1.100:11434` or `http://172.17.0.1:11434` instead of `http://localhost:11434`.
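
   To rule out networking and URL problems before going further, here is a minimal reachability sketch. The host, port, and container name are placeholders/assumptions, not values from this issue; substitute your own:

   ```bash
   # Most OpenAI-compatible servers list their models at /v1/models:
   curl http://<your-model-host>:<port>/v1/models

   # Ollama's root endpoint replies "Ollama is running" when reachable:
   curl http://172.17.0.1:11434

   # If Dify runs in Docker, repeat the check from inside the API container;
   # the container name varies by compose project (check `docker ps`), and
   # curl must be available in the image:
   docker exec -it docker-api-1 curl http://172.17.0.1:11434
   ```
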
5. **LocalAI Integration**: Set up LocalAI and load the example models, then register them in Dify:

   ```bash
   $ git clone https://github.com/go-skynet/LocalAI
   $ cd LocalAI/examples/langchain-chroma
   $ wget https://huggingface.co/skeskinen/ggml/resolve/main/all-MiniLM-L6-v2/ggml-model-q4_0.bin -O models/bert
   $ wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j
   $ mv .env.example .env
   $ docker-compose up -d --build
   $ docker logs -f langchain-chroma-api-1
   ```
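
   Once the containers are up, it can help to confirm that the LocalAI API actually responds before configuring Dify; a quick check, assuming the example's default port 8080:

   ```bash
   # Both models downloaded above should appear in this list:
   curl http://127.0.0.1:8080/v1/models
   ```
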
   Then go to `Settings > Model Providers > LocalAI` in Dify and fill in the details for the two models:

   - `ggml-gpt4all-j` (Text Generation): Model Name `gpt-3.5-turbo`, Server URL `http://127.0.0.1:8080` (or the host IP if using Docker)
   - `all-MiniLM-L6-v2` (Embeddings): Model Name `text-embedding-ada-002`, Server URL `http://127.0.0.1:8080` (or the host IP if using Docker)

   For more detailed instructions, refer to the LocalAI integration guide [1][2][3][4][5].
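
If validation still fails with a 404 after registering the models, a hedged smoke test against the OpenAI-compatible endpoints can show whether the server URL and model names line up (the model names below are the ones configured above; adjust host and port to your setup):

```bash
# Chat completion using the text-generation model name configured in Dify:
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello"}]}'

# Embedding using the embeddings model name:
curl http://127.0.0.1:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "text-embedding-ada-002", "input": "Hello"}'
```

A 404 from these calls usually points at a wrong path, port, or model name on the server side rather than at Dify.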
Self Checks
Dify version
github main branch
Cloud or Self Hosted
Self Hosted (Source)
Steps to reproduce
Add a self-deployed LLM model; the error "Credentials validation failed with status code 404" appears.
✔️ Expected Behavior
❌ Actual Behavior