TensorOpsAI / LLMstudio

Framework to bring LLM applications to production
https://tensorops.ai
Mozilla Public License 2.0

FEAT: Adapt to Rate Limit instead of Failure #37

Open MiNeves00 opened 10 months ago

MiNeves00 commented 10 months ago

Feature Request

Providers like OpenAI enforce rate limits (e.g. a cap on requests per minute). This feature would allow LLMstudio to wait it out (or keep retrying) when necessary, so that the call does not error even if it takes longer.
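
A minimal sketch of the basic "wait it out" behaviour, using exponential backoff with the OpenAI Python SDK (the helper name and backoff parameters are illustrative, not part of LLMstudio):

```python
import random
import time

import openai

client = openai.OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def chat_with_backoff(messages, model="gpt-3.5-turbo", max_retries=6):
    """Retry on rate-limit errors with exponential backoff instead of failing."""
    delay = 1.0
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(model=model, messages=messages)
        except openai.RateLimitError:
            if attempt == max_retries - 1:
                raise
            # Sleep with jitter, then double the delay for the next attempt.
            time.sleep(delay + random.uniform(0, delay))
            delay *= 2
```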

Advanced feature: by knowing the user's exact rate limit (which depends on their OpenAI tier, for example), LLMstudio could also decide which prompts to send at what time, maximizing use of the rate limit without overstepping it when requests are made in parallel.
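
For the advanced case, a hedged sketch of a requests-per-minute throttle that could be acquired before each call, assuming the user's RPM tier is known (the class name and limit value are hypothetical):

```python
import threading
import time
from collections import deque


class RpmThrottle:
    """Block callers just long enough to stay under a known requests-per-minute limit."""

    def __init__(self, rpm_limit: int):
        self.rpm_limit = rpm_limit
        self.sent = deque()          # timestamps of requests in the last 60 seconds
        self.lock = threading.Lock()

    def acquire(self):
        while True:
            with self.lock:
                now = time.monotonic()
                # Drop timestamps that have left the 60-second window.
                while self.sent and now - self.sent[0] >= 60:
                    self.sent.popleft()
                if len(self.sent) < self.rpm_limit:
                    self.sent.append(now)
                    return
                wait = 60 - (now - self.sent[0])
            time.sleep(wait)


throttle = RpmThrottle(rpm_limit=500)  # e.g. a tier with a 500 RPM limit
# throttle.acquire()  # call before each request, from any thread
```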

Motivation

Makes LLM calls more robust. The user does not need to worry about their application breaking when it makes too many requests per minute.

Your contribution

Discussion

ishaan-jaff commented 10 months ago

Hi @MiNeves00, I'm the maintainer of LiteLLM. We let you maximize throughput and throttle requests by load balancing between multiple LLM endpoints.

I thought it might be helpful for your use case; I'd love feedback if not.

Here's the quick start for the LiteLLM load balancer (works with 100+ LLMs): https://docs.litellm.ai/docs/simple_proxy#model-alias

Step 1: Create a config.yaml

```yaml
model_list:
  - model_name: openhermes
    litellm_params:
      model: openhermes
      temperature: 0.6
      max_tokens: 400
      custom_llm_provider: "openai"
      api_base: http://192.168.1.23:8000/v1
  - model_name: openhermes
    litellm_params:
      model: openhermes
      custom_llm_provider: "openai"
      api_base: http://192.168.1.23:8001/v1
  - model_name: openhermes
    litellm_params:
      model: openhermes
      custom_llm_provider: "openai"
      frequency_penalty: 0.6
      api_base: http://192.168.1.23:8010/v1
```

Step 2: Start the litellm proxy:

```shell
litellm --config /path/to/config.yaml
```

Step 3: Make a request to the LiteLLM proxy:

```shell
curl --location 'http://0.0.0.0:8000/chat/completions' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "openhermes",
    "messages": [
      {
        "role": "user",
        "content": "what llm are you"
      }
    ]
  }'
```
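
The same request from Python, assuming the proxy exposes an OpenAI-compatible endpoint as the curl example suggests (the api_key value is a placeholder, since the proxy holds the real provider credentials):

```python
import openai

client = openai.OpenAI(
    base_url="http://0.0.0.0:8000",  # the LiteLLM proxy started in step 2
    api_key="anything",              # placeholder; the proxy manages provider keys
)

response = client.chat.completions.create(
    model="openhermes",
    messages=[{"role": "user", "content": "what llm are you"}],
)
print(response.choices[0].message.content)
```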
MiNeves00 commented 9 months ago

Hey @ishaan-jaff, thanks for the info! Load balancing between different endpoints might end up spinning into an issue of its own; in that case we will be sure to take a look at LiteLLM, which seems pretty simple to test out.

For now, though, the focus of this issue is the use case where the user wants a specific provider: they do not want failures due to rate limiting, but they also want to maximize the rate they actually use.

ishaan-jaff commented 9 months ago

@MiNeves00 our router should allow you to maximize your throughput from your rate limits

https://docs.litellm.ai/docs/routing
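
For reference, a rough sketch of what using the router could look like for the deployments from the earlier config.yaml (based on the linked docs; the api_key placeholders are illustrative):

```python
from litellm import Router

# Two deployments of the same model; the router load-balances between them
# and can fall back to another deployment when one is rate limited.
model_list = [
    {
        "model_name": "openhermes",
        "litellm_params": {
            "model": "openai/openhermes",
            "api_base": "http://192.168.1.23:8000/v1",
            "api_key": "placeholder",
        },
    },
    {
        "model_name": "openhermes",
        "litellm_params": {
            "model": "openai/openhermes",
            "api_base": "http://192.168.1.23:8001/v1",
            "api_key": "placeholder",
        },
    },
]

router = Router(model_list=model_list)

response = router.completion(
    model="openhermes",
    messages=[{"role": "user", "content": "what llm are you"}],
)
```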

happy to make a PR on this

MiNeves00 commented 9 months ago

@ishaan-jaff I appreciate your availability for a PR. However, having just read the documentation again, my understanding is that you maximize throughput by routing between several models.

With just one model and one provider, the only LiteLLM feature I found useful for this scenario is the Cooldown function.

It seems to behave in a naive manner, though: when a model hits the allowed failure limit within a minute, it cools down for a whole minute, even when it might not have needed to cool down for that long. Am I interpreting it right? From the docs: "Cooldowns - Set the limit for how many calls a model is allowed to fail in a minute, before being cooled down for a minute."
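
For comparison, a less naive cooldown could back off only for as long as the provider asks, e.g. by reading the Retry-After header of a 429 response (a hedged sketch; the header name and fallback value are assumptions and vary by provider):

```python
import time

import requests


def sleep_for_rate_limit(response: requests.Response) -> None:
    """Back off only for the interval the provider asks for, not a fixed minute."""
    if response.status_code == 429:
        # Many providers include a Retry-After header (in seconds) on 429 responses;
        # fall back to a short default when it is missing.
        retry_after = float(response.headers.get("Retry-After", 5))
        time.sleep(retry_after)
```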