BerriAI / litellm

Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
https://docs.litellm.ai/docs/

[Bug]: Together AI not in provider list #3232

Closed RomanKoshkin closed 2 months ago

RomanKoshkin commented 2 months ago

What happened?

I'm trying to reproduce the code from the docs:


import litellm
from litellm import completion

# Create your own custom prompt template
model = "togethercomputer/llama-2-70b-chat"

litellm.register_prompt_template(
    model=model,
    initial_prompt_value="You are a good assistant",  # [OPTIONAL]
    roles={
        "system": {
            "pre_message": "[INST] <<SYS>>\n",  # [OPTIONAL]
            "post_message": "\n<</SYS>>\n [/INST]\n",  # [OPTIONAL]
        },
        "user": {
            "pre_message": "[INST] ",  # [OPTIONAL]
            "post_message": " [/INST]",  # [OPTIONAL]
        },
        "assistant": {
            "pre_message": "\n",  # [OPTIONAL]
            "post_message": "\n",  # [OPTIONAL]
        },
    },
    final_prompt_value="Now answer as best you can:",  # [OPTIONAL]
)

messages = [{"role": "user", "content": "Hello"}]
completion(model=model, messages=messages)

and it errors out with:

BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=togethercomputer/llama-2-70b-chat Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers

This error doesn't happen for model="groq/llama3-8b-8192".

Relevant log output

BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=togethercomputer/llama-2-70b-chat
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers


ishaan-jaff commented 2 months ago

@RomanKoshkin please use model="together_ai/llama-2-70b-chat"
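For context on why the prefix matters: litellm routes each request by the provider segment before the first `/` in the model string (falling back to other heuristics for well-known model names). A minimal sketch of that convention, using a hypothetical `split_provider` helper for illustration only (litellm's actual resolution logic lives in its own utilities and handles many more cases):

```python
# Hypothetical illustration of litellm's "provider/model" naming convention.
# "together_ai/..." names a known provider; "togethercomputer/..." does not,
# which is why the original call raised BadRequestError.
def split_provider(model: str):
    """Split a litellm-style model string into (provider, model_name)."""
    if "/" in model:
        provider, name = model.split("/", 1)
        return provider, name
    return None, model

print(split_provider("together_ai/llama-2-70b-chat"))
# ('together_ai', 'llama-2-70b-chat')  -> routed to Together AI
print(split_provider("togethercomputer/llama-2-70b-chat"))
# ('togethercomputer', 'llama-2-70b-chat')  -> not a recognized provider prefix
```

So the fix is simply to put the recognized `together_ai/` prefix in front of the model name, as suggested above.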

ishaan-jaff commented 2 months ago

feel free to re-open if that does not fix the issue for you