langgenius / dify

Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
https://dify.ai

LLM model is not available #5940

Open buddypia opened 2 days ago

buddypia commented 2 days ago

Dify version

0.6.12-fix1

Cloud or Self Hosted

Self Hosted (Docker)

Steps to reproduce

  1. Create a chatbot
  2. Select the gpt-4o model
  3. Say 'hi'

Docker Error

(Screenshot 2024-07-03 22:10:15)

The API key is correct, but for some reason it appears to have changed.

(Screenshot 2024-07-03 22:10:35)

✔️ Expected Behavior

Response from OpenAI's gpt-4o model

❌ Actual Behavior

No response from the model.

dosubot[bot] commented 2 days ago

The LLM model is not available because the Dify container cannot reach the LocalAI endpoint via localhost: inside the container, localhost resolves to the container itself, not to the host machine where LocalAI is running. Use the host's IP address instead.

To resolve the unauthorized HTTP request, point Dify at the LocalAI endpoint using the host IP address. Here are the steps to configure it correctly:

  1. Identify the Host IP Address: Determine the IP address of the host machine where LocalAI is running. This can be a LAN IP address, such as 192.168.1.100.

  2. Update the Server URL in Dify Settings:

    • Go to Settings > Model Providers > LocalAI.
    • For the LLM Model (ggml-gpt4all-j):
      • Model Type: Text Generation
      • Model Name: gpt-3.5-turbo
      • Server URL: http://<your-LocalAI-endpoint-domain>:8080 (e.g., http://192.168.1.100:8080)
    • For the Embedding Model (all-MiniLM-L6-v2):
      • Model Type: Embeddings
      • Model Name: text-embedding-ada-002
      • Server URL: http://<your-LocalAI-endpoint-domain>:8080 (e.g., http://192.168.1.100:8080)
  3. Save the Configuration: Click "Save" to apply the changes and use the models in the application.

Additionally, if you encounter an error such as "Validation failed: You exceeded your current quota, please check your plan and billing details," it means the account behind your OpenAI key has exhausted its quota. Visit OpenAI's billing page to add funds.

By ensuring that the Dify container uses the host IP address to reach the LocalAI endpoint, and by verifying that your OpenAI key still has quota available, you should be able to resolve the unauthorized HTTP request issue [1][2][3].

To continue talking to Dosu, mention @dosu.