scaleapi / llm-engine

Scale LLM Engine public repository
https://llm-engine.scale.com
Apache License 2.0
781 stars · 56 forks

Control frequency - completion #277

Open Stealthwriter opened 1 year ago

Stealthwriter commented 1 year ago

Hi,

Is there a way to change the frequency_penalty or logit bias when sending a completion request?
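For context, the two parameters follow OpenAI-style sampling semantics: frequency_penalty subtracts a value proportional to how often a token has already appeared, and logit bias adds a fixed offset to chosen tokens' logits. Here is a minimal, self-contained sketch of those semantics; the helper and variable names are illustrative, not part of the llm-engine API.

```python
from collections import Counter

def apply_sampling_controls(logits, generated_token_ids,
                            frequency_penalty=0.0, logit_bias=None):
    """Return a copy of `logits` with frequency penalty and logit bias applied.

    logits: dict mapping token id -> raw logit
    generated_token_ids: tokens produced so far in this completion
    frequency_penalty: subtract penalty * count(token) from each seen token
    logit_bias: dict mapping token id -> additive bias
    """
    counts = Counter(generated_token_ids)
    adjusted = dict(logits)
    # Penalize tokens proportionally to how often they have appeared.
    for token_id, count in counts.items():
        if token_id in adjusted:
            adjusted[token_id] -= frequency_penalty * count
    # Apply fixed per-token biases on top.
    for token_id, bias in (logit_bias or {}).items():
        if token_id in adjusted:
            adjusted[token_id] += bias
    return adjusted

# Token 7 was generated twice, so frequency_penalty=0.5 lowers its logit
# by 1.0; logit_bias pushes token 3 up by 2.0.
logits = {3: 0.1, 7: 2.0, 9: 1.5}
out = apply_sampling_controls(logits, [7, 7, 9],
                              frequency_penalty=0.5,
                              logit_bias={3: 2.0})
print(out)  # {3: 2.1, 7: 1.0, 9: 1.0}
```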

yixu34 commented 1 year ago

Hi @Stealthwriter , thanks for reaching out. Yes, you can basically route any API changes through to the underlying inference framework(s) we use, assuming they support the fields you need. For instance, we currently support https://github.com/scaleapi/open-tgi (forked from text-generation-inference v0.9.4) and vLLM. Would you like to try making the change yourself?
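The suggested change amounts to threading a new field from the public completion request through to the backend payload. A minimal sketch of that plumbing, assuming the backend (e.g. vLLM, whose sampling parameters accept a frequency_penalty) understands the field; the class and function names here are hypothetical, not llm-engine's actual request models.

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class CompletionRequest:
    """Illustrative public-facing request model (not llm-engine's real one)."""
    prompt: str
    max_new_tokens: int = 64
    temperature: float = 1.0
    # New optional field to forward to the inference framework.
    frequency_penalty: Optional[float] = None

def to_backend_payload(req: CompletionRequest) -> dict:
    """Build the inference-framework payload, dropping unset optional fields."""
    return {k: v for k, v in asdict(req).items() if v is not None}

payload = to_backend_payload(
    CompletionRequest(prompt="Hello", frequency_penalty=0.7))
print(payload)
# {'prompt': 'Hello', 'max_new_tokens': 64, 'temperature': 1.0,
#  'frequency_penalty': 0.7}
```

When the caller omits the field, it never reaches the backend, so existing behavior is unchanged.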

Stealthwriter commented 1 year ago

How does LLM Engine differ from TGI and vLLM?

yixu34 commented 1 year ago

You can think of LLM Engine as adding 1) a set of higher-level abstractions (e.g., APIs are expressed in terms of Completions and Fine-tunes) and 2) autoscaling via Kubernetes. TGI and vLLM are great, but you have to bring your own scaling.