buddypia opened 2 days ago
The LLM model is not available in your self-hosted Dify setup because the Dify container cannot access the LocalAI endpoint via `localhost`; inside the container, `localhost` refers to the container itself, not the host machine. You need to use the host IP address instead.
To resolve the unauthorized HTTP request to the OpenAI API endpoint, ensure that the Dify container can access the LocalAI endpoint by using the host IP address. Here are the steps to configure it correctly:
1. Identify the Host IP Address: Determine the IP address of the host machine where LocalAI is running. This can be a LAN IP address, such as `192.168.1.100`.

2. Update the Server URL in Dify Settings: Go to `Settings > Model Providers > LocalAI` and configure both models.

   For the LLM model (e.g., `ggml-gpt4all-j`):
   - Model Name: `gpt-3.5-turbo`
   - Server URL: `http://<your-LocalAI-endpoint-domain>:8080` (e.g., `http://192.168.1.100:8080`)

   For the embedding model (e.g., `all-MiniLM-L6-v2`):
   - Model Name: `text-embedding-ada-002`
   - Server URL: `http://<your-LocalAI-endpoint-domain>:8080` (e.g., `http://192.168.1.100:8080`)

3. Save the Configuration: Click "Save" to apply the changes and use the models in the application.
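If LocalAI runs directly on the Docker host, an alternative to hard-coding a LAN IP is Docker's `host.docker.internal` alias. On Linux this alias is not set up automatically, but it can be mapped to the host gateway in the Dify compose file. A minimal sketch, assuming the API service in your `docker-compose.yaml` is named `api` (the service name is an assumption; adjust to your actual compose file, and apply the same mapping to the worker service):

```yaml
services:
  api:
    extra_hosts:
      # Map host.docker.internal to the Docker host's gateway so the
      # container can reach LocalAI listening on the host's port 8080
      - "host.docker.internal:host-gateway"
```

With this in place, the Server URL can be set to `http://host.docker.internal:8080`, and you can check reachability from inside the container with, e.g., `docker exec <api-container> curl http://host.docker.internal:8080/v1/models` (the container name depends on your setup).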
Additionally, if you encounter an error such as "Validation failed: You exceeded your current quota, please check your plan and billing details," it indicates that the account associated with your OpenAI key has run out of funds. Please visit OpenAI to recharge your account.
By ensuring that the Dify container uses the correct host IP address to access the LocalAI endpoint and verifying that your OpenAI key has sufficient funds, you should be able to resolve the unauthorized HTTP request issue [1][2][3].
To continue talking to Dosu, mention @dosu.
Self Checks
Dify version
0.6.12-fix1
Cloud or Self Hosted
Self Hosted (Docker)
Steps to reproduce
Docker Error
The API key is correct, but it has changed for some reason.
✔️ Expected Behavior
Response from OpenAI's gpt-4o model
❌ Actual Behavior
no response