BerriAI / litellm

Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
https://docs.litellm.ai/docs/

[Bug]: Litellm Mistral Hugging Face #514

Closed StanGirard closed 9 months ago

StanGirard commented 9 months ago

What happened?

A bug happened!

(screenshot of the error attached)

Relevant log output

backend-core  | INFO:     192.168.65.1:58627 - "POST /chat/b6ee0d6a-94f9-4dca-8540-159a6a5800b5/question/stream?brain_id= HTTP/1.1" 200 OK
backend-core  | 2023-10-03 08:39:58,238:WARNING - Retrying langchain.chat_models.litellm.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: LiteLLM.Exception: Unsupported parameters passed: n.
backend-core  | 2023-10-03 08:40:02,244:WARNING - Retrying langchain.chat_models.litellm.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: LiteLLM.Exception: Unsupported parameters passed: n.
backend-core  | 2023-10-03 08:40:06,269:WARNING - Retrying langchain.chat_models.litellm.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: LiteLLM.Exception: Unsupported parameters passed: n.
backend-core  | 2023-10-03 08:40:10,299:WARNING - Retrying langchain.chat_models.litellm.acompletion_with_retry.<locals>._completion_with_retry in 8.0 seconds as it raised APIError: LiteLLM.Exception: Unsupported parameters passed: n.
backend-core  | 2023-10-03 08:40:18,309:WARNING - Retrying langchain.chat_models.litellm.acompletion_with_retry.<locals>._completion_with_retry in 10.0 seconds as it raised APIError: LiteLLM.Exception: Unsupported parameters passed: n.
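The log above shows langchain's retry wrapper backing off between attempts. A simplified standalone sketch of that pattern (the function name and the exact backoff schedule are illustrative, not langchain's internals — the real log shows a slightly different delay sequence); note that since "Unsupported parameters passed: n" is deterministic, every retry fails the same way:

```python
import time

def retry_with_backoff(fn, max_attempts=5, base=4.0, cap=10.0, sleep=time.sleep):
    """Call fn(); on ValueError, retry after base, 2*base, ... seconds, capped at cap.

    Retrying only helps with transient failures; a validation error like
    'Unsupported parameters passed: n' fails identically on every attempt.
    """
    delay = base
    for attempt in range(max_attempts):
        try:
            return fn()
        except ValueError:
            if attempt == max_attempts - 1:
                raise
            sleep(min(delay, cap))
            delay *= 2
```

Injecting a fake `sleep` makes the schedule easy to inspect without actually waiting.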

Twitter / LinkedIn details

No response

HallerPatrick commented 9 months ago

This is also happening when using a VertexAI model; in my case, codechat-bison.

minh-v commented 9 months ago

Getting this bug too, also running Mistral, trying to run ChatLiteLLM through an LLMChain:
ValueError: LiteLLM.Exception: Unsupported parameters passed: n

krrishdholakia commented 9 months ago

Hi @StanGirard @HallerPatrick @minh-v

This is occurring because args are being passed in that aren't translated for the provider.

What should the expected behaviour here be?
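The error comes from a validation step that rejects any OpenAI-style param with no mapping for the target provider. A minimal sketch of that kind of guard (the function name, the supported-param set, and the HuggingFace-only table here are assumptions for illustration, not litellm's actual code — only the error message is quoted from the thread):

```python
def map_optional_params(custom_llm_provider, **params):
    """Translate OpenAI-style params to a provider's names, rejecting the rest."""
    # Assumed provider table; only a HuggingFace TGI entry is sketched here.
    supported = {
        "huggingface": {"stream", "temperature", "max_tokens", "top_p", "stop", "n"},
    }[custom_llm_provider]

    # Anything explicitly passed but not in the provider's table is rejected.
    unsupported = [k for k, v in params.items() if v is not None and k not in supported]
    if unsupported:
        raise ValueError(
            "LiteLLM.Exception: Unsupported parameters passed: "
            + ", ".join(unsupported)
        )

    optional_params = {k: v for k, v in params.items() if v is not None}
    # TGI exposes the number-of-completions knob under a different name
    # ("best_of" is used here as the illustrative target).
    if "n" in optional_params:
        optional_params["best_of"] = optional_params.pop("n")
    return optional_params
```

With this shape, the fix for a new provider is a one-line addition to its supported set plus a rename in the mapping step.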

krrishdholakia commented 9 months ago

Looks like we need to add support for the TGI 'n' param; doing so right now.

krrishdholakia commented 9 months ago

Fix pushed - 512769e84105a24fd7118ba18b71805ab94c28d3

Will update the ticket once the solution is in prod.

krrishdholakia commented 9 months ago

Also created a ticket to prevent future issues like this: https://github.com/BerriAI/litellm/issues/517

Please add any comments / ideas.

krrishdholakia commented 9 months ago

Fix pushed in v0.1.816.

Can you check and confirm whether this works for you @StanGirard / @HallerPatrick / @minh-v?

minh-v commented 9 months ago

No, getting the same error:

in _check_valid_arg
raise ValueError("LiteLLM.Exception: Unsupported parameters passed: {}".format(', '.join(unsupported_params)))
ValueError: LiteLLM.Exception: Unsupported parameters passed: n

minh-v commented 9 months ago

The error for Mistral is fixed if you add those same changes to the together_ai section of the fix.

    elif custom_llm_provider == "together_ai":
        ## check if unsupported param passed in 
        supported_params = ["stream", "temperature", "max_tokens", "top_p", "stop", "frequency_penalty", "n"]
        _check_valid_arg(supported_params=supported_params)

        if stream:
            optional_params["stream_tokens"] = stream
        if temperature:
            optional_params["temperature"] = temperature
        if top_p:
            optional_params["top_p"] = top_p
        if max_tokens:
            optional_params["max_tokens"] = max_tokens
        if n:
            optional_params["best_of"] = n

krrishdholakia commented 9 months ago

@minh-v looks like together ai doesn't support the 'n' param https://docs.together.ai/reference/inference

I know langchain sends n=1 by default. Are you setting this param for together ai?

If not, I can introduce a default-value check.

krrishdholakia commented 9 months ago

fix pushed - e834c063ff4a1632d9224976760a0cc35d7000b1

krrishdholakia commented 9 months ago

@minh-v will update once this is in prod.

I've pushed a patch that checks if n=1 (the langchain default); if so, it won't raise an error.
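The idea behind that patch can be sketched as follows: a value equal to a known client default is a no-op, so the guard lets it through instead of raising (the function name and the defaults table are illustrative assumptions, not litellm's actual code):

```python
# Assumed table of client-side defaults; the thread only establishes
# that langchain sends n=1 by default.
CLIENT_DEFAULTS = {"n": 1}

def check_valid_args(supported_params, passed_params):
    """Reject unsupported params, except values equal to a known client
    default (e.g. langchain's n=1), which cannot change provider behavior."""
    unsupported = [
        k for k, v in passed_params.items()
        if k not in supported_params and CLIENT_DEFAULTS.get(k) != v
    ]
    if unsupported:
        raise ValueError(
            "LiteLLM.Exception: Unsupported parameters passed: "
            + ", ".join(unsupported)
        )
```

So n=1 passes silently even for providers without an 'n' mapping, while n=3 still raises.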

krrishdholakia commented 9 months ago

Closing as this is now resolved.

krrishdholakia commented 9 months ago

@minh-v re-open if v0.1.819 didn't solve your problem.