hyperonym / basaran

Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models.

Strong need for multiple `models` in a single deployment #263

Open KastanDay opened 1 year ago

KastanDay commented 1 year ago

As mentioned in #179, users need support for multiple models. On a multi-GPU on-prem machine, I want to write a config file like:

CUDA_VISIBLE_DEVICES=0     MODEL=meta-llama/Llama-2-7b-chat-hf
CUDA_VISIBLE_DEVICES=1,2,3 MODEL=meta-llama/Llama-2-13b-chat-hf

Then users should be able to specify "model": "<either_model>", in their requests.

I can start a PR if you want this feature. Let me know if you have any suggestions on the best way to load these models and keep them mostly separate from each other.

peakji commented 1 year ago

Hi @KastanDay! I would suggest implementing a routing service externally, one that decides which backend service/process to call based on the `model` parameter. This not only preserves the isolation of each model deployment, but also allows load balancing when the same model is replicated across multiple machines.
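
A minimal sketch of such a router (illustrative only, not part of Basaran): a small Flask app that reads the `model` field from the JSON body and proxies the request to whichever instance serves that model. The ports, the `UPSTREAMS` mapping, and the assumption that each Basaran instance is already running on its own localhost port are all hypothetical here.

import requests
from flask import Flask, Response, request

app = Flask(__name__)

# Illustrative model -> upstream mapping; adjust ports to your deployment.
UPSTREAMS = {
    "meta-llama/Llama-2-7b-chat-hf": "http://127.0.0.1:8001",
    "meta-llama/Llama-2-13b-chat-hf": "http://127.0.0.1:8002",
}

@app.route("/v1/completions", methods=["POST"])
def route_completion():
    body = request.get_json(force=True)
    upstream = UPSTREAMS.get(body.get("model", ""))
    if upstream is None:
        return {"error": "unknown model"}, 404
    # Forward the request and relay the response chunk by chunk,
    # so streaming (SSE) responses still stream through the router.
    resp = requests.post(f"{upstream}/v1/completions", json=body, stream=True)
    return Response(
        resp.iter_content(chunk_size=None),
        status=resp.status_code,
        content_type=resp.headers.get("Content-Type", "application/json"),
    )

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)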

KastanDay commented 1 year ago

Thank you!

Do you have any suggestions for an easy routing system? Something short and sweet? I'm an experienced backend programmer, but I haven't done much with load balancing or reverse proxies. Thanks again!

Edit: In particular, I want to respect the `model` parameter. How can I intercept a request, pull out the `model` parameter, and forward it to the proper local server (where each server has a unique port on localhost)?

KastanDay commented 1 year ago

Answering my own question: I suppose NGINX or Traefik would work well.

Here's what GPT-4 said; just pretend `backend` is the `model` parameter.

You can configure Traefik to route requests based on query parameters using its `Query` rule. Here's a basic example using Docker Compose and Traefik to route HTTP requests based on a query parameter named `backend`.

Docker Compose File (docker-compose.yml)

Here, we define two backend services (backend1 and backend2) and a Traefik service to act as the router.

version: '3.7'

services:
  traefik:
    image: traefik:v2.4
    ports:
      - "80:80"
    volumes:
      - "./traefik.yml:/etc/traefik/traefik.yml"
      # The Docker provider reads container labels via the Docker socket.
      - "/var/run/docker.sock:/var/run/docker.sock:ro"

  backend1:
    image: nginx:alpine
    labels:
      # Needed because exposedByDefault is false in traefik.yml.
      - "traefik.enable=true"
      - "traefik.http.routers.backend1.rule=Host(`localhost`) && Query(`backend=backend1`)"

  backend2:
    image: nginx:alpine
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.backend2.rule=Host(`localhost`) && Query(`backend=backend2`)"

Traefik Configuration File (traefik.yml)

This file sets up Traefik, defines the port-80 entry point, and tells it to look for configuration in Docker labels.

api:
  dashboard: true

# Traefik v2 requires entry points to be declared explicitly.
entryPoints:
  web:
    address: ":80"

providers:
  docker:
    exposedByDefault: false

To bring up the services:

  1. Save the Docker Compose content to `docker-compose.yml`.
  2. Save the Traefik configuration to `traefik.yml`.
  3. Run `docker-compose up`.

Usage

After bringing up the Docker Compose stack, you can route your requests by including the `backend` query parameter:
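
For example (illustrative requests; the nginx placeholders above just serve their default page):

curl "http://localhost/?backend=backend1"
curl "http://localhost/?backend=backend2"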

Traefik will route the request to the appropriate backend based on the query parameter.
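
One caveat worth noting: Traefik's `Query` matcher only sees the URL query string, while OpenAI-style completion requests carry `model` in the JSON body of a POST. To use this setup directly, clients would have to duplicate the model name as a query parameter; otherwise a small body-aware proxy (like the Flask sketch above) is needed.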