ray-project / ray-llm

RayLLM - LLMs on Ray
https://aviary.anyscale.com
Apache License 2.0

How to adjust engine kwargs from the default values for models in `./models/` #134

Closed · SamComber closed 4 months ago

SamComber commented 4 months ago

I'm taking ray-llm out for a spin, as we need to shard an AWQ-quantized Mixtral-8x7B across 2 GPU nodes for our use case (vLLM has been working great for us on a single node).

I can't see a config for this model inside the `./models/` directory, so I began putting one together, but it got me thinking: there is no one-size-fits-all set of vLLM `engine_kwargs`. A few of our deployed models rely on different kwargs here to optimise throughput for particular LLM tasks.
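To illustrate the kind of variation I mean, here are two hypothetical deployments tuned differently (names and values below are made up for illustration, not defaults from this repo):

```yaml
# Illustrative only: two hypothetical models whose engine_kwargs diverge.
summarizer:  # long-context documents, few concurrent requests
  engine_kwargs:
    max_num_batched_tokens: 32768
    max_num_seqs: 64
    gpu_memory_utilization: 0.90
classifier:  # short prompts, many concurrent requests
  engine_kwargs:
    max_num_batched_tokens: 8192
    max_num_seqs: 256
    gpu_memory_utilization: 0.85
```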

My question is: is it possible to override these defaults inside the RayService definition? It would seem extremely limiting if that weren't the case, so I'm sure you can, but some confirmation would be great here before I start provisioning node groups and doing some pre-work. Thanks guys :)

Here's the config I've drafted so far (and, below it, how I was planning to reference it from the Serve application):

```yaml
deployment_config:
  autoscaling_config:
    min_replicas: 1
    initial_replicas: 1
    max_replicas: 100
    target_num_ongoing_requests_per_replica: 20
    metrics_interval_s: 10.0
    look_back_period_s: 30.0
    smoothing_factor: 0.6
    downscale_delay_s: 300.0
    upscale_delay_s: 15.0
  max_concurrent_queries: 192
  ray_actor_options:
    resources:
      accelerator_type_a100_80g_aws: 0.1
engine_config:
  model_id: TheBloke/Mixtral-8x7B-Instruct-v0.1-AWQ
  hf_model_id: TheBloke/Mixtral-8x7B-Instruct-v0.1-AWQ
  type: VLLMEngine
  engine_kwargs:
    quantization: awq
    trust_remote_code: true
    max_num_batched_tokens: 32768
    max_num_seqs: 192
    gpu_memory_utilization: 0.78
  max_total_tokens: 32768
  generation:
    prompt_format:
      system: "{instruction} + "
      assistant: "{instruction}</s> "
      trailing_assistant: ""
      user: "[INST] {system}{instruction} [/INST]"
      system_in_user: true
      default_system_message: "Always assist with care, respect, and truth. Respond with utmost utility yet securely. Avoid harmful, unethical, prejudiced, or negative content. Ensure replies promote fairness and positivity."
    stopping_sequences: []
scaling_config:
  num_workers: 8
  num_gpus_per_worker: 1
  num_cpus_per_worker: 8
  placement_strategy: "STRICT_PACK"
  resources_per_worker:
    accelerator_type_a100_80g_aws: 0.1
```
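For completeness, this is how I was planning to point the Serve application at the custom config from within the RayService definition, following the pattern in the repo README (the file path is a placeholder for wherever we bake the YAML into our image):

```yaml
# Sketch of the Serve application section of a RayService definition.
# import_path follows the repo README; the model config path is a placeholder.
applications:
  - name: router
    route_prefix: /
    import_path: rayllm.backend:router_application
    args:
      models:
        - "./models/continuous_batching/TheBloke--Mixtral-8x7B-Instruct-v0.1-AWQ.yaml"
```

If the entries under `args.models` are just paths to these YAML files, then presumably whatever `engine_kwargs` we set in them take effect; that's the behaviour I'm hoping someone can confirm.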