ray-project / ray-llm

RayLLM - LLMs on Ray
https://aviary.anyscale.com
Apache License 2.0

Autoscaling support in Ray-llm #133

Open Jeffwan opened 4 months ago

Jeffwan commented 4 months ago

Just curious — does ray-llm fully leverage Ray Serve autoscaling (https://docs.ray.io/en/latest/serve/autoscaling-guide.html)? It seems Ray Serve only supports `target_num_ongoing_requests_per_replica` and `max_concurrent_queries`. As we know, LLM output length varies widely, so these request-count-based knobs are not a good fit for LLM workloads. How do you achieve better autoscaling support for LLMs?
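For context, Ray Serve's autoscaler is roughly a proportional policy: it sizes the replica count so that each replica handles about `target_num_ongoing_requests_per_replica` in-flight requests. The sketch below is a simplified illustration of that idea (not Ray's actual implementation; the function name and defaults are made up for this example). It also shows why the commenter's concern holds: two batches with the same request count but very different output token lengths look identical to this policy.

```python
import math

def desired_replicas(total_ongoing_requests: int,
                     target_per_replica: int,
                     min_replicas: int = 1,
                     max_replicas: int = 8) -> int:
    """Simplified proportional autoscaling decision.

    Scale so each replica handles roughly `target_per_replica`
    ongoing requests, clamped to [min_replicas, max_replicas].
    This mirrors the spirit of Ray Serve's
    `target_num_ongoing_requests_per_replica`, but is only a sketch.
    """
    raw = total_ongoing_requests / target_per_replica
    return max(min_replicas, min(max_replicas, math.ceil(raw)))

# With a target of 16 ongoing requests per replica and 100 in flight:
print(desired_replicas(100, 16))  # -> ceil(100 / 16) = 7
```

Note that the request count is the only signal here: a request generating 10 output tokens and one generating 2,000 tokens each count as "1 ongoing request", which is why per-request metrics can misestimate real LLM load.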