langgenius / dify

Dify is an open-source LLM app development platform. Dify's intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
https://dify.ai

How to set a proxy #3079

Closed: wangshuai-wuhan closed this issue 7 months ago

wangshuai-wuhan commented 7 months ago

Provide a description of the help you need

I want to set a proxy for OpenAI. How can I do that?

dosubot[bot] commented 7 months ago

Hey there, @wangshuai-wuhan! Long time no see in the digital realm of issues and code 🚀

To configure a proxy for OpenAI within the Dify platform, you can use LocalAI as a proxy. Here are the steps you need to follow:

  1. Ensure LocalAI is deployed and running. If you need help with this, you can refer to the LocalAI deployment guide.

  2. Once LocalAI is running, go to Settings > Model Providers > LocalAI in the Dify platform.

  3. For each model you wish to use through LocalAI, fill in the details as follows:

    • Model 1 (Text Generation Model):

      • Model Type: Text Generation
      • Model Name: gpt-3.5-turbo (or the custom name you configured in models/gpt-3.5-turbo.yaml; see the sketch after this list)
      • Server URL: Use http://127.0.0.1:8080 if Dify is running locally and not in a Docker container. If Dify is deployed via Docker, use the host domain or LAN IP address, e.g., http://192.168.1.100:8080.
    • Model 2 (Embeddings Model):

      • Model Type: Embeddings
      • Model Name: text-embedding-ada-002 (or the custom name you configured in models/embeddings.yaml)
      • Server URL: Follow the same guidelines as for Model 1 regarding the Server URL.
  4. Click "Save" for each model configuration to apply the settings.
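
For reference in step 3, here is a minimal sketch of what a LocalAI model definition such as models/gpt-3.5-turbo.yaml can look like; the backend and the weights file name are assumptions that depend on which model your LocalAI instance actually serves:

```yaml
# Hypothetical models/gpt-3.5-turbo.yaml for LocalAI.
# "backend" and the weights file are placeholders for your actual setup.
name: gpt-3.5-turbo             # must match the Model Name entered in Dify
backend: llama
parameters:
  model: your-model-file.gguf   # local weights file (placeholder)
context_size: 2048
```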

By following these steps, you configure Dify to use LocalAI as a proxy for OpenAI, allowing you to perform local model inference with the specified models.

If you need more detailed instructions on how to deploy and run LocalAI in the Dify platform, you can refer to the LocalAI deployment guide.

As for the specific requirements for the proxy server to be used with OpenAI within the Dify platform, I wasn't able to find an answer within the repository. However, as long as the proxy server can handle the traffic between Dify and OpenAI and doesn't block any necessary ports or protocols, it should work fine.

I hope this helps! If you have any more questions, feel free to ask.

leslie2046 commented 7 months ago

@wangshuai-wuhan just add 3 lines to your docker-compose.yaml, like below: [image]
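
The screenshot itself isn't preserved here. Judging from the follow-up comment below, the three lines were presumably HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables on the api service; the proxy address in this sketch is a placeholder:

```yaml
# Sketch of the likely docker-compose.yaml addition (proxy address is hypothetical).
services:
  api:
    environment:
      HTTP_PROXY: http://192.168.1.100:7890    # your proxy host:port (placeholder)
      HTTPS_PROXY: http://192.168.1.100:7890
      NO_PROXY: localhost,127.0.0.1
```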

xyb commented 1 month ago

> @wangshuai-wuhan just add 3 lines to your docker-compose.yaml, like below: [image]

Add sandbox to NO_PROXY to avoid this error:

HTTP Request: POST http://sandbox:8194/v1/sandbox/run "HTTP/1.1 502 Bad Gateway"
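
A sketch of that adjustment, building on the variables from the previous comment; sandbox is the internal service hostname taken from the error above, and the proxy address remains a placeholder:

```yaml
# Including internal service hostnames in NO_PROXY keeps Dify's requests to the
# sandbox container off the proxy, avoiding the 502 above (proxy address is hypothetical).
services:
  api:
    environment:
      HTTP_PROXY: http://192.168.1.100:7890
      HTTPS_PROXY: http://192.168.1.100:7890
      NO_PROXY: localhost,127.0.0.1,sandbox
```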