predibase / lorax

Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
https://loraexchange.ai
Apache License 2.0
2.04k stars 135 forks

Latency increase when run on multi-GPU #116

Open prd-tuong-nguyen opened 8 months ago

prd-tuong-nguyen commented 8 months ago

System Info

I ran your Docker image in two cases, single GPU and sharded across 4 GPUs. Config for the multi-GPU case:

{
  "model_id": "Open-Orca/Mistral-7B-OpenOrca",
  "adapter_id": "",
  "source": "hub",
  "adapter_source": "hub",
  "revision": null,
  "validation_workers": 2,
  "sharded": true,
  "num_shard": 4,
  "quantize": "BitsandbytesNF4",
  "dtype": null,
  "trust_remote_code": false,
  "max_concurrent_requests": 128,
  "max_best_of": 1,
  "max_stop_sequences": 4,
  "max_input_length": 2048,
  "max_total_tokens": 4096,
  "waiting_served_ratio": 1.2,
  "max_batch_prefill_tokens": 4096,
  "max_batch_total_tokens": 100000,
  "max_waiting_tokens": 20,
  "max_active_adapters": 10,
  "adapter_cycle_time_s": 2,
  "hostname": "0.0.0.0",
  "port": 8000,
  "shard_uds_path": "/tmp/lorax-server",
  "master_addr": "localhost",
  "master_port": 29500,
  "huggingface_hub_cache": "/data",
  "weights_cache_override": null,
  "disable_custom_kernels": false,
  "cuda_memory_fraction": 1,
  "json_output": true,
  "otlp_endpoint": null,
  "cors_allow_origin": [],
  "watermark_gamma": null,
  "watermark_delta": null,
  "ngrok": false,
  "ngrok_authtoken": null,
  "ngrok_edge": null,
  "env": false,
  "download_only": false
}

Information

Tasks

Reproduction

Run Docker with --sharded true --num_shard 4

Expected behavior

Same or better performance when running on multiple GPUs.

tgaddair commented 8 months ago

Hey @prd-tuong-nguyen, what kind of networking do you have between these GPUs? If they're using PCIe, the frequent communication between devices will likely cause performance to degrade. To get good performance on multiple GPUs, you typically need NVLink.

My recommendation is to use a single GPU if possible, and only use multi-GPU if you have to due to memory constraints (for example, serving a 70b-parameter model in fp16 on GPUs with 40GB of VRAM).
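For reference, a rough back-of-the-envelope sketch of that memory constraint (illustrative numbers only: fp16 weights alone, ignoring KV cache, activations, and framework overhead):

params = 70e9          # 70B-parameter model
bytes_per_param = 2    # fp16
gpu_vram_gb = 40       # e.g. a single 40GB GPU

weights_gb = params * bytes_per_param / 1e9   # ~140 GB of weights alone
min_gpus = -(-weights_gb // gpu_vram_gb)      # ceiling division -> 4 GPUs

print(f"~{weights_gb:.0f} GB of weights -> at least {min_gpus:.0f} x {gpu_vram_gb} GB GPUs")

In other words, model parallelism is only worth the extra communication overhead when the model genuinely cannot fit on one device.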

prd-tuong-nguyen commented 8 months ago

@tgaddair Thanks for your reply. I want to take advantage of multiple GPUs to increase concurrency, so do you have any suggestion for this? For example, if I run one instance per GPU, what is the best way to load balance between them?

tgaddair commented 8 months ago

Hey @prd-tuong-nguyen, in your case I would recommend using data parallelism rather than model parallelism. Specifically, I would run one replica per GPU and then put a load balancer in front of them using something like Kubernetes.

As for the best load balancing strategy, if you do not have enough load to keep the GPUs fully utilized, or the number of adapters you're using is relatively low (<25 or so), then I would suggest using a round robin load balancing strategy. That will keep the replicas equally busy, which should help keep latency low.

If, however, you're operating at very high scale, I would suggest using a load balancer with a consistent hashing policy based on the adapter ID, so that you can more efficiently batch together requests for the same adapter and maximize throughput.
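For illustration, here is a minimal Python sketch of the two routing strategies above (the replica URLs and adapter ID are hypothetical, and in practice this logic would live in the load balancer or ingress rather than in application code):

import bisect
import hashlib
import itertools

# Hypothetical replica endpoints, one lorax instance per GPU.
REPLICAS = [f"http://lorax-gpu-{i}:8000" for i in range(4)]

# Strategy 1: round robin -- keeps all replicas equally busy.
_round_robin = itertools.cycle(REPLICAS)

def pick_round_robin() -> str:
    return next(_round_robin)

# Strategy 2: consistent hashing on the adapter ID -- requests for the same
# adapter land on the same replica, so they can be batched together and the
# adapter stays resident in that replica's cache.
def _hash(key: str) -> int:
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

class ConsistentHashRouter:
    def __init__(self, replicas, vnodes: int = 64):
        # Virtual nodes smooth out the distribution; adding or removing a
        # replica only remaps a small share of adapters.
        self._ring = sorted(
            (_hash(f"{r}#{v}"), r) for r in replicas for v in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    def pick(self, adapter_id: str) -> str:
        idx = bisect.bisect(self._keys, _hash(adapter_id)) % len(self._ring)
        return self._ring[idx][1]

if __name__ == "__main__":
    router = ConsistentHashRouter(REPLICAS)
    print(pick_round_robin())                     # cycles through replicas
    print(router.pick("my-org/my-lora-adapter"))  # stable for a given adapter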

prd-tuong-nguyen commented 8 months ago

@tgaddair Thank you for your suggestion, I will try it <3

bi1101 commented 6 months ago

Hi @prd-tuong-nguyen, do you have any performance benchmarks for the multi-GPU setup in terms of throughput?