deepjavalibrary / djl

An Engine-Agnostic Deep Learning Framework in Java
https://djl.ai
Apache License 2.0

Expose vLLM logprobs in model output #3491

Open CoolFish88 opened 1 month ago

CoolFish88 commented 1 month ago

Description

vLLM's sampling parameters include a richer set of options than what is currently exposed; among them, logprobs has particularly wide utility.

When testing by adding the logprobs option to the request payload, the model output schema remained unchanged ({"generated_text": "_modeloutput"}), suggesting the option was not propagated to the output.
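
A minimal sketch of the kind of request used in testing (the endpoint URL, prompt, and exact parameter values are placeholders, not the real payload):

```python
import requests

# Placeholder djl-serving endpoint; adjust host/port for the actual deployment.
URL = "http://localhost:8080/invocations"

# logprobs added alongside the usual sampling parameters.
payload = {
    "inputs": "What is Deep Java Library?",
    "parameters": {
        "max_tokens": 128,
        "temperature": 0.7,
        "seed": 42,
        "logprobs": 1,
    },
}

response = requests.post(URL, json=payload)
# The response still only contains {"generated_text": "..."}; no logprobs field.
print(response.json())
```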

Will this change the current API? How?

Probably by enriching the output schema.
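
Purely as an illustration (a hypothetical shape, not an existing schema), the enriched output could look something like:

```python
# Hypothetical enriched output record: the current response is just
# {"generated_text": "..."}; a logprobs-aware variant could attach
# per-token log probabilities alongside it.
enriched_output = {
    "generated_text": "Deep Java Library is an engine-agnostic framework ...",
    "logprobs": [
        {"token": "Deep", "logprob": -0.12},
        {"token": " Java", "logprob": -0.03},
    ],
}
```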

Who will benefit from this enhancement?

Anyone who wants logprobs extracted from model predictions.

References

frankfliu commented 1 month ago

@sindhuvahinis

CoolFish88 commented 1 month ago

Found this while looking into CloudWatch logs:

The following parameters are not supported by vllm with rolling batch: {'max_tokens', 'seed', 'logprobs', 'temperature'}

siddvenk commented 1 month ago

What is the payload you are using to invoke the endpoint?

We do expose generation parameters that can be included in the inference request. Details are in https://docs.djl.ai/master/docs/serving/serving/docs/lmi/user_guides/lmi_input_output_schema.html.

We have slightly different names for some of the generation/sampling parameters, since our API unifies different inference backends like vllm, tensorrt-llm, huggingface accelerate, and transformers-neuronx.
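
For example, a request using the unified parameter names looks roughly like this (the endpoint URL is a placeholder, and the exact fields should be checked against the schema page above; as documented there, `details` is what turns on token-level information such as log probabilities):

```python
import requests

# Placeholder endpoint; use your deployment's invocation URL.
URL = "http://localhost:8080/invocations"

# LMI unified parameter names: max_new_tokens rather than vLLM's max_tokens,
# and "details" to request token-level information in the response.
payload = {
    "inputs": "What is Deep Java Library?",
    "parameters": {
        "max_new_tokens": 128,
        "temperature": 0.7,
        "seed": 42,
        "details": True,
    },
}

result = requests.post(URL, json=payload).json()

# When details are returned, each entry in details["tokens"] carries the token
# text and its log probability (see the schema docs for the exact field names).
for token in result.get("details", {}).get("tokens", []):
    print(token.get("text"), token.get("log_prob"))
```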

If you want to use a different API schema, we provide documentation on writing your own input/output parsers: https://docs.djl.ai/master/docs/serving/serving/docs/lmi/user_guides/lmi_input_output_schema.html#custom-pre-and-post-processing.

We also support the OpenAI chat completions schema for chat-type models: https://docs.djl.ai/master/docs/serving/serving/docs/lmi/user_guides/chat_input_output_schema.html.
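
For chat models, an OpenAI-style request is along these lines (placeholder endpoint again; check the chat schema page above for which fields, including logprobs/top_logprobs, apply to your backend):

```python
import requests

# OpenAI-compatible chat completions request (placeholder endpoint).
payload = {
    "messages": [
        {"role": "user", "content": "What is Deep Java Library?"},
    ],
    "max_tokens": 128,
    "temperature": 0.7,
    "logprobs": True,
    "top_logprobs": 2,
}

response = requests.post("http://localhost:8080/invocations", json=payload)
# The response follows the chat completions schema; logprob information, when
# supported, appears under choices[i]["logprobs"].
print(response.json())
```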