SciSharp / BotSharp

The AI Agent Framework in .NET
https://botsharp.readthedocs.io
Apache License 2.0

LLM Load balance #252

Open · Oceania2018 opened this issue 8 months ago

Oceania2018 commented 8 months ago

Inspired by this load balancing idea. Load balancing should allow requests to be spread across multiple models, providers, and API keys, so we can avoid hitting token rate limits.
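As a rough illustration of the idea (not BotSharp's actual interfaces; the LlmEndpoint record and RoundRobinBalancer below are hypothetical names), a selector could rotate each request across a pool of provider/model/key combinations so that no single deployment exhausts its token limit:

using System;
using System.Collections.Generic;
using System.Threading;

// Hypothetical description of one LLM deployment: provider + model + API key.
public record LlmEndpoint(string Provider, string Model, string ApiKey);

// Thread-safe round-robin selection over a pool of endpoints, so successive
// requests rotate across providers/keys instead of hammering a single one.
public class RoundRobinBalancer
{
    private readonly IReadOnlyList<LlmEndpoint> _endpoints;
    private int _counter = -1;

    public RoundRobinBalancer(IReadOnlyList<LlmEndpoint> endpoints)
    {
        if (endpoints == null || endpoints.Count == 0)
            throw new ArgumentException("At least one endpoint is required.", nameof(endpoints));
        _endpoints = endpoints;
    }

    public LlmEndpoint Next()
    {
        // Interlocked keeps the rotation consistent under concurrent requests;
        // the cast to uint avoids a negative index if the counter overflows.
        var index = (int)((uint)Interlocked.Increment(ref _counter) % (uint)_endpoints.Count);
        return _endpoints[index];
    }
}

// Example: two deployments of the same model under different providers/keys.
// var balancer = new RoundRobinBalancer(new[]
// {
//     new LlmEndpoint("azure-openai", "gpt-4", "<key-1>"),
//     new LlmEndpoint("openai",       "gpt-4", "<key-2>"),
// });
// var endpoint = balancer.Next(); // pick an endpoint for the next completion call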

ishaan-jaff commented 1 month ago

Hi @Oceania2018, I'm the maintainer of LiteLLM. We provide an open-source proxy for load balancing across Azure, OpenAI, Bedrock, Vertex, and 100+ other LLMs.

It can handle 500+ requests/second.

From this thread it looks like you're trying to maximize throughput. I hope our solution makes that easier for you (I'd love feedback if you try it).

Here's the quick start for using the LiteLLM Proxy for load balancing.

Doc: https://docs.litellm.ai/docs/proxy/reliability

Step 1: Create a config.yaml

# Deployments that share the same model_name are grouped together;
# the proxy load-balances incoming "gpt-4" requests across them.
model_list:
  - model_name: gpt-4
    litellm_params:
      model: azure/chatgpt-v-2
      api_base: https://openai-gpt-4-test-v-1.openai.azure.com/
      api_version: "2023-05-15"
      api_key: 
  - model_name: gpt-4
    litellm_params:
      model: azure/gpt-4
      api_key: 
      api_base: https://openai-gpt-4-test-v-2.openai.azure.com/
  - model_name: gpt-4
    litellm_params:
      model: azure/gpt-4
      api_key: 
      api_base: https://openai-gpt-4-test-v-2.openai.azure.com/

Step 2: Start the litellm proxy:

litellm --config /path/to/config.yaml
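
If the litellm command isn't available yet, the proxy can typically be installed via pip (assuming a Python environment):

pip install 'litellm[proxy]'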

Step 3: Make a request to the LiteLLM proxy:

curl --location 'http://0.0.0.0:8000/chat/completions' \
--header 'Content-Type: application/json' \
--data ' {
      "model": "gpt-4",
      "messages": [
        {
          "role": "user",
          "content": "what llm are you"
        }
      ]
    }
'
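
Since the proxy exposes an OpenAI-compatible endpoint, a .NET caller such as BotSharp can send the same request with HttpClient. A minimal sketch, assuming the proxy from Step 2 is reachable at localhost:8000 (adjust the host/port to your setup):

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class ProxyDemo
{
    static async Task Main()
    {
        using var http = new HttpClient { BaseAddress = new Uri("http://localhost:8000") };

        // Same payload as the curl example above; the proxy speaks the OpenAI chat format.
        var payload = @"{
          ""model"": ""gpt-4"",
          ""messages"": [
            { ""role"": ""user"", ""content"": ""what llm are you"" }
          ]
        }";

        var response = await http.PostAsync(
            "/chat/completions",
            new StringContent(payload, Encoding.UTF8, "application/json"));

        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}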