ray-project / ray-llm

RayLLM - LLMs on Ray
https://aviary.anyscale.com
Apache License 2.0

T5 model support #33

Open ravindra-ut opened 1 year ago

ravindra-ut commented 1 year ago

Hi, can you please add support for T5 model inference? I see that only decoder-only models are supported: https://github.com/ray-project/aviary/tree/master/models/static_batching

Thanks

avnishn commented 1 year ago

Hi @ravindra-ut, I was able to deploy Flan-T5 XL on Aviary v0.2.0 with the following model YAML. I ran it on a single A10 GPU:

# Ray Serve deployment settings: autoscaling and per-replica request limits
deployment_config:
  autoscaling_config:
    min_replicas: 1
    initial_replicas: 1
    max_replicas: 1
    target_num_ongoing_requests_per_replica: 64.0
    metrics_interval_s: 10.0
    look_back_period_s: 30.0
    smoothing_factor: 1.0
    downscale_delay_s: 300.0
    upscale_delay_s: 60.0
  max_concurrent_queries: 2000
  ray_actor_options:
    resources:
      accelerator_type_a10: 0.01
# Model and inference-engine settings
engine_config:
  model_id: google/flan-t5-xl
  type: TextGenerationInferenceEngine
  model_init_kwargs:
    trust_remote_code: true
  scheduler:
    policy:
      max_batch_total_tokens: 4096
      max_batch_prefill_tokens: 2048
      max_input_length: 511
      max_total_tokens: 512
  generation:
    generate_kwargs:
      do_sample: true
      temperature: 0.4
      top_p: 1.0
      ignore_eos_token: false
    prompt_format:
      system: "{instruction}"
      assistant: "{instruction}"
      trailing_assistant: ""
      user: "{instruction}"
      default_system_message: ""
    stopping_sequences: ["<unk>", "</s>"]
# Resources requested for each replica's workers
scaling_config:
  num_workers: 1
  num_gpus_per_worker: 1
  num_cpus_per_worker: 8
  placement_strategy: "STRICT_PACK"
  resources_per_worker:
    accelerator_type_a10: 0.01
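
For reference, here is a rough sketch of how one might query the deployed model over HTTP after serving the YAML above (e.g. with something like aviary run --model <path-to-yaml>, assuming that CLI entry point from the v0.2.0 release). The base URL, route, and request payload shape below are assumptions and may differ between RayLLM/Aviary versions:

# Rough sketch of querying the deployed flan-t5-xl model over HTTP.
# The base URL, route, and payload shape are assumptions -- check the
# aviary/RayLLM docs for the exact API of the version you deploy.
import requests

BASE_URL = "http://localhost:8000"   # assumed Ray Serve HTTP address
MODEL_ID = "google/flan-t5-xl"       # matches engine_config.model_id above

resp = requests.post(
    f"{BASE_URL}/query/{MODEL_ID.replace('/', '--')}",  # hypothetical route
    json={"prompt": "Translate English to German: How old are you?"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())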

I'm going to close this for now since I don't think this is an issue anymore. Please reopen if you run into any other problems.

waleedkadous commented 1 year ago

Could we please check this into the repo as a supported LLM config?

ravindra-ut commented 1 year ago

Thanks @avnishn. Is T5 supported with vLLM or TGI?

akshay-anyscale commented 10 months ago

T5 is not supported with vLLM right now. We can pick this up once vLLM adds support: https://github.com/vllm-project/vllm/issues/187