huggingface / text-generation-inference

Large Language Model Text Generation Inference
http://hf.co/docs/text-generation-inference
Apache License 2.0

OpenAI supports `top_p = 0.0` and `top_p = 1.0`, but TGI fails with a validation error for either value. #2222

Closed: michael-newsrx closed this issue 2 weeks ago

michael-newsrx commented 1 month ago


Reproduction

Fails

import openai
from huggingface_hub import InferenceEndpoint

# inference_endpoint1() and hf_bearer_token() are the reporter's own helpers.
ep1: InferenceEndpoint = inference_endpoint1()
while ep1.status != "running":
    if ep1.status == "failed":
        raise RuntimeError(f"Failed to create inference endpoint: {ep1.name}")
    ep1.wait(timeout=1)

client = openai.OpenAI(
    base_url=ep1.url + "/v1",
    api_key=hf_bearer_token(),
)

# print(f"Available models: {client.models.list()}")
role_system = {"role": "system", "content": "I am an evil robot overlord."}
role_user = {"role": "user", "content": "What is your command? Be very succinct."}
chat_completion = client.chat.completions.create(
    model="tgi",
    messages=[role_system, role_user],
    stream=True,
    max_tokens=1024,
    temperature=0.0,
    top_p=1.0,  # rejected: TGI returns a validation error
)

Works

import openai
from huggingface_hub import InferenceEndpoint

# inference_endpoint1() and hf_bearer_token() are the reporter's own helpers.
ep1: InferenceEndpoint = inference_endpoint1()
while ep1.status != "running":
    if ep1.status == "failed":
        raise RuntimeError(f"Failed to create inference endpoint: {ep1.name}")
    ep1.wait(timeout=1)

client = openai.OpenAI(
    base_url=ep1.url + "/v1",
    api_key=hf_bearer_token(),
)

# print(f"Available models: {client.models.list()}")
role_system = {"role": "system", "content": "I am an evil robot overlord."}
role_user = {"role": "user", "content": "What is your command? Be very succinct."}
chat_completion = client.chat.completions.create(
    model="tgi",
    messages=[role_system, role_user],
    stream=True,
    max_tokens=1024,
    temperature=0.0,
    top_p=0.99,  # accepted: strictly inside (0.0, 1.0)
)

Expected behavior

See also https://github.com/huggingface/text-generation-inference/issues/1896, where the patch did not address this issue even though it was raised as part of that ticket.

Impact

This generally breaks libraries such as guidance, where the library is hard-coded to use top_p=1.0 for the OpenAI interface. A possible client-side stopgap is sketched below.
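
As a stopgap until TGI accepts the value, a caller can drop top_p=1.0 before the request is sent, since omitting the field makes TGI fall back to its default of 1.0. A minimal sketch, assuming the openai Python client; tgi_safe_chat is a hypothetical helper, not part of any library:

import openai

def tgi_safe_chat(client: openai.OpenAI, **kwargs):
    # TGI rejects an explicit top_p of 1.0; omitting the field makes the
    # server apply its default of 1.0, which samples identically.
    if kwargs.get("top_p") == 1.0:
        kwargs.pop("top_p")
    return client.chat.completions.create(**kwargs)

This does not help when the offending library builds the request itself, as guidance does, which is why a server-side fix is still needed.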

IQ179 commented 1 month ago

https://github.com/huggingface/text-generation-inference/blob/8511669cb29115bdf0bc2da5328e69d041030996/router/src/validation.rs#L248-L255

If you want to set top_p to 1.0, you can simply send top_p as None, which will result in the default value of 1.0 being applied.
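
For example, reusing the client and messages from the reproduction above (a sketch; with the field left unset, the server applies its default):

chat_completion = client.chat.completions.create(
    model="tgi",
    messages=[role_system, role_user],
    stream=True,
    max_tokens=1024,
    temperature=0.0,
    # top_p omitted entirely: TGI applies its default of 1.0 without error
)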

It seems the equality condition in that check is what causes the error.
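
The linked check enforces exclusive bounds on both ends. Re-expressed in Python for illustration (a sketch of the logic as I read it, not the actual Rust implementation):

def validate_top_p(top_p):
    # Mirrors the exclusive-bounds check in router/src/validation.rs:
    # a missing value falls back to the default; 0.0 and 1.0 are rejected.
    if top_p is None:
        return 1.0
    if not (0.0 < top_p < 1.0):
        raise ValueError("`top_p` must be > 0.0 and < 1.0")
    return top_p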

michael-conrad commented 1 month ago

This doesn't provide a resolution to the issue.

The Docker container rejects top_p=1.0 on the OpenAI-compatible interface, but that interface should accept top_p=1.0 rather than fail.

Is there an easy way to "patch" the container and deploy using the patched version?

> https://github.com/huggingface/text-generation-inference/blob/8511669cb29115bdf0bc2da5328e69d041030996/router/src/validation.rs#L248-L255
>
> If you want to set top_p to 1.0, you can simply send top_p as None, which will result in the default value of 1.0 being applied.
>
> It seems the equality condition in that check is what causes the error.

See also: https://github.com/guidance-ai/guidance/issues/945

ErikKaum commented 1 month ago

Hi @michael-newsrx

Thank you for bringing this to our attention and for making the PR 👍

As far as I can tell, there shouldn't be anything blocking getting this merged. I'll approve the CI run and can take over merging the PR.

michael-conrad commented 1 month ago

https://github.com/huggingface/text-generation-inference/pull/2231

github-actions[bot] commented 2 weeks ago

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

cornzz commented 1 day ago

@ErikKaum I ran into this issue when I switched API base URLs and my script suddenly broke, because the new API uses TGI, which doesn't allow top_p=1.0. I can work around this, but it would be nice if TGI allowed it, as I don't see why it is disallowed.

ErikKaum commented 1 day ago

Hi @cornzz 👋

I understand that it's annoying that it breaks the client. But for now we're still opting for a clear error over silently discarding user input.

That said, if there's enough demand for treating top_p=1.0 the same as no top_p at all, we're open to it. One good way to give an indication of demand would be thumbs-up reactions on an issue filed as a feature request.

Hopefully this makes sense to you 👍

michael-conrad commented 1 day ago

Depending on the client software, it could result in breakage that prevents a customer from using TGI at all.


cornzz commented 1 day ago

Hey @ErikKaum, thanks for your quick response! It's not a problem; I was reusing a script, and I am wondering why its authors set top_p at all, since it defaults to 1.

Still, sorry if I am misunderstanding something, but what do you mean by discarding user input? Why can't the user set top_p to 1.0 explicitly, when leaving top_p unset makes it default to 1.0 anyway?