vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

JSON formatting issue #1191

Closed · rounak610 closed this 7 months ago

rounak610 commented 1 year ago

How can I get the response from vLLM in proper JSON format? Does vLLM support the outlines, guidance, or jsonformer libraries?

viktor-ferenczi commented 1 year ago

Supporting guided generation would indeed be more efficient, but I don't see support for any of the libraries you mentioned on the roadmap.

Some workarounds until then:
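
For example, one common client-side workaround (an illustrative sketch, not taken from the thread) is to prompt the model for JSON and then validate and retry the output:

```python
# Illustrative client-side workaround (not from the original comment):
# prompt for JSON, then parse and retry until the output is valid.
import json

from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.1")  # any instruct-tuned model
params = SamplingParams(temperature=0.3, max_tokens=256)  # some randomness so retries differ
prompt = (
    "Return ONLY a JSON object with keys 'name' (string) and 'age' (integer) "
    "for the following person: Ada Lovelace, age 36.\nJSON:"
)

result = None
for _ in range(3):  # retry a few times on invalid JSON
    text = llm.generate([prompt], params)[0].outputs[0].text
    try:
        result = json.loads(text)
        break
    except json.JSONDecodeError:
        continue

print(result)
```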

viktor-ferenczi commented 1 year ago

I think the best approach may be to add API support for integrating guided-generation libraries, then provide an adapter for each of the popular libraries separately.

@WoosukKwon What do you think?

viktor-ferenczi commented 1 year ago

Related to #535

viktor-ferenczi commented 1 year ago

Mostly the same as #288

noamgat commented 1 year ago

LM Format Enforcer is a library that achieves this and supports vLLM. There is already a sample notebook showing the vLLM integration. It currently uses monkeypatching, which will be removed once the next vLLM version, with the logits processing API, is released.

(Disclosure: I am the author of the library.) (Later edit: that version has been released, and the example no longer uses monkeypatching.)
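
For context, vLLM's logits processing API mentioned above accepts callables that rewrite the logits before each sampling step. A minimal sketch (not LMFE itself, and assuming the `logits_processors` field of `SamplingParams`):

```python
# Minimal sketch of vLLM's logits-processor hook (not LMFE itself).
# A logits processor is a callable (generated_token_ids, logits) -> logits
# that runs before each token is sampled.
from typing import List

import torch
from vllm import LLM, SamplingParams

def ban_token_zero(token_ids: List[int], logits: torch.Tensor) -> torch.Tensor:
    # Toy constraint: never sample token id 0. A format enforcer would instead
    # mask every token that would break the JSON grammar given `token_ids`.
    logits[0] = float("-inf")
    return logits

llm = LLM(model="facebook/opt-125m")
params = SamplingParams(max_tokens=64, logits_processors=[ban_token_zero])
print(llm.generate(["Hello,"], params)[0].outputs[0].text)
```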

arshadshk commented 11 months ago

@noamgat does LM Format Enforcer support API access to vLLM?

noamgat commented 11 months ago

> @noamgat does LM Format Enforcer support API access to vLLM?

With LM Format Enforcer it's the other way around: it integrates into the inference engine's pipeline. So if you have existing vLLM code, it's easy to plug LMFE into it.

vLLM example notebook
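
The wiring in that notebook looks roughly like the sketch below. The lmformatenforcer helper names here are recalled from the integration module and may differ between versions, so treat them as assumptions and refer to the notebook for the exact API:

```python
# Rough sketch of plugging LMFE into existing vLLM code. The helper names below
# are assumptions based on the linked notebook; check it for the exact API.
from vllm import LLM, SamplingParams
from lmformatenforcer import JsonSchemaParser
from lmformatenforcer.integrations.vllm import (
    build_vllm_logits_processor,               # wraps the enforcer as a vLLM logits processor
    build_vllm_token_enforcer_tokenizer_data,  # precomputes tokenizer data for the enforcer
)

schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
    "required": ["name", "age"],
}

llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.1")
tokenizer_data = build_vllm_token_enforcer_tokenizer_data(llm)
processor = build_vllm_logits_processor(tokenizer_data, JsonSchemaParser(schema))

params = SamplingParams(max_tokens=256, logits_processors=[processor])
print(llm.generate(["Describe Ada Lovelace as JSON:"], params)[0].outputs[0].text)
```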

wdhitchc commented 11 months ago

@noamgat

Since it's not an integration with the vLLM server though, I can't just have an LLM deployment and use the format enforcer against it; I need to have the model loaded locally / in the same place as the format enforcer code....

noamgat commented 11 months ago

Yes, this is a known issue. To solve it, we would need to be able to pass "logits processor instructions" in the network request to the vLLM server. I proposed something similar for huggingface-inference-server in this draft PR (meant for discussion), but I haven't gotten a response from that team yet, so I didn't proceed with it.

If vLLM were interested in adopting a similar solution, it would allow LMFE to work with server / multi-GPU deployments as well.
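
To illustrate the idea (the endpoint and field names below are invented for illustration and are not an existing vLLM API), the proposal amounts to sending a serializable description of the constraint with the request, rather than a Python callable that cannot cross a process or network boundary:

```python
# Hypothetical illustration only: the endpoint and field names are invented to
# show the idea of shipping serializable "logits processor instructions" with a
# request, instead of a Python callable that cannot cross process boundaries.
import requests

payload = {
    "prompt": "Describe Ada Lovelace as JSON:",
    "max_tokens": 256,
    # A JSON schema is a "simple object" the server could turn into a logits
    # processor on its side (e.g. via LMFE or outlines).
    "logits_processor": {
        "type": "json_schema",
        "schema": {
            "type": "object",
            "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
        },
    },
}
response = requests.post("http://localhost:8000/generate", json=payload, timeout=60)
print(response.json())
```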

viktor-ferenczi commented 11 months ago

That would be nice. Support should be added for LMQL as well, I think.

wdhitchc commented 11 months ago

Is that not what's happening here, @noamgat @viktor-ferenczi? https://github.com/vllm-project/vllm/pull/535

Any chance we can get this in?

noamgat commented 11 months ago

A PR similar to #535 (disclosure: mine) was already merged. This is how LMFE works with vLLM: example notebook.

The challenge (which #535 does not cover) is how to pass this information to the custom code in a multiprocess / networked environment. A serialization / "simple object" parametrization solution is required, which is what was proposed in the PR to huggingface-inference-server.

wdhitchc commented 11 months ago

@noamgat so maybe I'm not super clear on what's going on internally. I haven't done my full due diligence and deep-dived into the code, but...

I saw your PR that got in. I thought we just needed another one that utilizes what you put in, inside the API / OpenAI-compatible layer. My understanding was that the final comment on #535 was a request to rebase and utilize your code in the API layer.

I was thinking about making a dumb solution for myself where I actually install LMFE into vLLM, then create a new API endpoint that is essentially a parameterized version of the notebook you shared. I was thinking I'd put it inside the OpenAI-compatible server and make it look like function calling. Not sure how efficient it would be, but it could get the job done quickly and accelerate me. This would probably just be a personal fork until the better solution gets in. I don't feel qualified to try to implement the correct pattern, but I'm happy to give it a shot if you'll hold my hand a bit through the process.

rlouf commented 10 months ago

Outlines now provides a "fork" of vLLM's FastAPI deployment example, if that helps: https://outlines-dev.github.io/outlines/reference/vllm/

noamgat commented 10 months ago

> Outlines now provides a "fork" of vLLM's FastAPI deployment example, if that helps: https://outlines-dev.github.io/outlines/reference/vllm/

That is super cool! Congrats on the release! If you want to avoid monkey-patching the vLLM logits processor API, you can cache according to the generated token tuple instead of seq_id. I'm not sure how seq_id behaves with features such as beam search.
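
A minimal sketch of that caching idea (an illustration, not the outlines or LMFE implementation): key the per-sequence parser state by the tuple of tokens generated so far, so that forked sequences (e.g. during beam search) resolve their own prefixes correctly:

```python
# Minimal sketch of the caching idea above (not the outlines/LMFE implementation):
# key the per-sequence parser state by the tuple of generated tokens instead of a
# seq_id, so forked sequences (e.g. beam search) still find the right state.
from typing import Dict, List, Tuple

import torch

class StatefulFormatProcessor:
    def __init__(self):
        # Parser state reached after each generated-token prefix.
        self._states: Dict[Tuple[int, ...], object] = {(): "initial-state"}

    def _advance(self, state: object, token_id: int) -> object:
        # Placeholder: a real enforcer would step its grammar/FSM here.
        return state

    def _allowed_token_ids(self, state: object) -> List[int]:
        # Placeholder: a real enforcer would return the grammar-legal token ids.
        return list(range(10))

    def __call__(self, token_ids: List[int], logits: torch.Tensor) -> torch.Tensor:
        key = tuple(token_ids)
        if key not in self._states:
            # Derive this prefix's state from its parent prefix and last token.
            parent = self._states.get(key[:-1], "initial-state")
            self._states[key] = self._advance(parent, key[-1])
        allowed = self._allowed_token_ids(self._states[key])
        mask = torch.full_like(logits, float("-inf"))
        mask[allowed] = logits[allowed]
        return mask
```

Such a processor could then be passed per request, e.g. `SamplingParams(logits_processors=[StatefulFormatProcessor()])`.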

hmellor commented 7 months ago

Support for guided decoding using outlines was merged in #2819.
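
With that merged, the OpenAI-compatible server accepts guided-decoding parameters such as `guided_json`. A rough usage sketch (parameter names as documented around that release, so check the docs for your vLLM version):

```python
# Sketch of requesting guided JSON decoding through vLLM's OpenAI-compatible
# server after #2819. `guided_json` is a vLLM-specific extension parameter;
# check the docs of your vLLM version for the exact name and behaviour.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
    "required": ["name", "age"],
}
completion = client.chat.completions.create(
    model="mistralai/Mistral-7B-Instruct-v0.1",
    messages=[{"role": "user", "content": "Describe Ada Lovelace as JSON."}],
    extra_body={"guided_json": schema},
)
print(completion.choices[0].message.content)
```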