BerriAI / litellm

Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate, Groq (100+ LLMs)
https://docs.litellm.ai/docs/

[Bug]: validate_environment() doesn't seem to provide the correct response for some models? #3190

Closed. paul-gauthier closed this issue 2 months ago.

paul-gauthier commented 3 months ago

What happened?

validate_environment() works fine for gpt-3.5-turbo and claude-3-sonnet-20240229.

But for models like gemini/gemini-1.5-pro-latest or command-r-plus it always returns keys_in_environment = False and missing_keys = [], regardless of whether the needed API key variable is set in the environment. It always claims the keys aren't set, and it never names the missing keys that need to be set.

Below is a minimal script that shows this. For each of the 4 models, it calls validate_environment() and then completion(). First it does this with a properly set up environment; as you can see, all the completion() calls succeed. Then it erases the environment and checks all 4 models again; this time all the completion() calls fail because there are no keys in the environment.

But as noted above, validate_environment() doesn't ever work correctly for gemini/gemini-1.5-pro-latest or command-r-plus.
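For reference, with an empty environment I'd expect something along these lines (assuming GEMINI_API_KEY and COHERE_API_KEY are the variables these providers read):

import litellm

# Hypothetical expected results with no keys set in the environment:
litellm.validate_environment("gemini/gemini-1.5-pro-latest")
# -> {'keys_in_environment': False, 'missing_keys': ['GEMINI_API_KEY']}

litellm.validate_environment("command-r-plus")
# -> {'keys_in_environment': False, 'missing_keys': ['COHERE_API_KEY']}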

Here is the script:

import os

import litellm
litellm.suppress_debug_info = True

def check_model(model):
    print("model:", model)

    res = litellm.validate_environment(model)
    print("validate_environment:", res)

    try:
        messages=[{"role": "user", "content": "Hello!"}]
        response = litellm.completion(model=model, messages=messages)
        print("completion:", response.choices[0].message.content)
    except Exception as err:
        print("completion error:", err)

    print()

def check_all_models():
    check_model("gpt-3.5-turbo")
    check_model("claude-3-sonnet-20240229")
    #check_model("gemini-1.5-pro-latest")
    check_model("gemini/gemini-1.5-pro-latest")
    check_model("command-r-plus")

# check all the models WITH keys properly set in the environment
check_all_models()

# erase the environment, so validate_environment should tell me the env vars I need
os.environ = {}
print("erased environment")
print()

# check all the models WITHOUT keys properly set in the environment
check_all_models()

And here is the output:

model: gpt-3.5-turbo
validate_environment: {'keys_in_environment': True, 'missing_keys': []}
completion: Hello! How can I assist you today?

model: claude-3-sonnet-20240229
validate_environment: {'keys_in_environment': True, 'missing_keys': []}
completion: Hello! How can I assist you today?

model: gemini/gemini-1.5-pro-latest
validate_environment: {'keys_in_environment': False, 'missing_keys': []}
completion: Hello! 👋 How can I help you today? 😊 

model: command-r-plus
validate_environment: {'keys_in_environment': False, 'missing_keys': []}
completion: Hello! How can I help you today?

erased environment

model: gpt-3.5-turbo
validate_environment: {'keys_in_environment': False, 'missing_keys': ['OPENAI_API_KEY']}
completion error: OpenAIException - Traceback (most recent call last):
  File "/Users/gauthier/Projects/aider/.venv/lib/python3.11/site-packages/litellm/llms/openai.py", line 414, in completion
    raise e
  File "/Users/gauthier/Projects/aider/.venv/lib/python3.11/site-packages/litellm/llms/openai.py", line 350, in completion
    openai_client = OpenAI(
                    ^^^^^^^
  File "/Users/gauthier/Projects/aider/.venv/lib/python3.11/site-packages/openai/_client.py", line 104, in __init__
    raise OpenAIError(
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

model: claude-3-sonnet-20240229
validate_environment: {'keys_in_environment': False, 'missing_keys': ['ANTHROPIC_API_KEY']}
completion error: Missing Anthropic API Key - A call is being made to anthropic but no key is set either in the environment variables or via params

model: gemini/gemini-1.5-pro-latest
validate_environment: {'keys_in_environment': False, 'missing_keys': []}
completion error: PalmException - Invalid api key

model: command-r-plus
validate_environment: {'keys_in_environment': False, 'missing_keys': []}
completion error: {"message":"no api key supplied"}


krrishdholakia commented 3 months ago

hey @paul-gauthier acknowledging this issue.

Will fix + add better testing on our end for this.

paul-gauthier commented 3 months ago

FWIW, it appears that 242 of the models in litellm.model_cost.keys() are returning sane results.

95 models are returning the seemingly invalid {'keys_in_environment': False, 'missing_keys': []}

I used this script to enumerate them:

import litellm
litellm.suppress_debug_info = True

bad = set()
good = set()

models = litellm.model_cost.keys()
for model in models:
    res = litellm.validate_environment(model)
    missing_keys = res.get("missing_keys")
    keys_in_environment = res.get("keys_in_environment")

    if not keys_in_environment and not missing_keys:
        bad.add(model)
    else:
        good.add(model)

print('num good', len(good))
print()

print('num bad', len(bad))

for model in sorted(bad):
    print(model)

And got this output, listing all the bad models:

num good 242

num bad 95
anyscale/HuggingFaceH4/zephyr-7b-beta
anyscale/Mixtral-8x7B-Instruct-v0.1
anyscale/codellama/CodeLlama-34b-Instruct-hf
anyscale/meta-llama/Llama-2-13b-chat-hf
anyscale/meta-llama/Llama-2-70b-chat-hf
anyscale/meta-llama/Llama-2-7b-chat-hf
anyscale/mistralai/Mistral-7B-Instruct-v0.1
babbage-002
cloudflare/@cf/meta/llama-2-7b-chat-fp16
cloudflare/@cf/meta/llama-2-7b-chat-int8
cloudflare/@cf/mistral/mistral-7b-instruct-v0.1
cloudflare/@hf/thebloke/codellama-7b-instruct-awq
command-light
command-r
command-r-plus
davinci-002
deepinfra/01-ai/Yi-34B-200K
deepinfra/01-ai/Yi-34B-Chat
deepinfra/01-ai/Yi-6B-200K
deepinfra/Gryphe/MythoMax-L2-13b
deepinfra/Phind/Phind-CodeLlama-34B-v2
deepinfra/amazon/MistralLite
deepinfra/codellama/CodeLlama-34b-Instruct-hf
deepinfra/cognitivecomputations/dolphin-2.6-mixtral-8x7b
deepinfra/deepinfra/airoboros-70b
deepinfra/deepinfra/mixtral
deepinfra/jondurbin/airoboros-l2-70b-gpt4-1.4.1
deepinfra/lizpreciatior/lzlv_70b_fp16_hf
deepinfra/meta-llama/Llama-2-13b-chat-hf
deepinfra/meta-llama/Llama-2-70b-chat-hf
deepinfra/meta-llama/Llama-2-7b-chat-hf
deepinfra/mistralai/Mistral-7B-Instruct-v0.1
deepinfra/mistralai/Mixtral-8x7B-Instruct-v0.1
deepinfra/openchat/openchat_3.5
gemini-1.0-pro-vision
gemini-1.0-pro-vision-001
gemini-pro-vision
gemini/gemini-1.5-pro
gemini/gemini-1.5-pro-latest
gemini/gemini-pro
gemini/gemini-pro-vision
gpt-3.5-turbo-instruct
gpt-3.5-turbo-instruct-0914
groq/gemma-7b-it
groq/llama2-70b-4096
groq/llama3-70b-8192
groq/llama3-8b-8192
groq/mixtral-8x7b-32768
mistral/mistral-embed
mistral/mistral-large-2402
mistral/mistral-large-latest
mistral/mistral-medium
mistral/mistral-medium-2312
mistral/mistral-medium-latest
mistral/mistral-small
mistral/mistral-small-latest
mistral/mistral-tiny
mistral/open-mixtral-8x7b
palm/chat-bison
palm/chat-bison-001
palm/text-bison
palm/text-bison-001
palm/text-bison-safety-off
palm/text-bison-safety-recitation-off
perplexity/codellama-34b-instruct
perplexity/codellama-70b-instruct
perplexity/llama-2-70b-chat
perplexity/mistral-7b-instruct
perplexity/mixtral-8x7b-instruct
perplexity/pplx-70b-chat
perplexity/pplx-70b-online
perplexity/pplx-7b-chat
perplexity/pplx-7b-online
perplexity/sonar-medium-chat
perplexity/sonar-medium-online
perplexity/sonar-small-chat
perplexity/sonar-small-online
sagemaker/meta-textgeneration-llama-2-13b
sagemaker/meta-textgeneration-llama-2-13b-f
sagemaker/meta-textgeneration-llama-2-70b
sagemaker/meta-textgeneration-llama-2-70b-b-f
sagemaker/meta-textgeneration-llama-2-7b
sagemaker/meta-textgeneration-llama-2-7b-f
together-ai-20.1b-40b
together-ai-3.1b-7b
together-ai-40.1b-70b
together-ai-7.1b-20b
together-ai-up-to-3b
voyage/voyage-01
voyage/voyage-2
voyage/voyage-code-2
voyage/voyage-large-2
voyage/voyage-law-2
voyage/voyage-lite-01
voyage/voyage-lite-02-instruct
paul-gauthier commented 2 months ago

Any update here? Having the library validate and list the needed env variables is a pretty core feature. It really helps users as they try to connect to different providers.
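As a rough sketch (not how aider actually does it), the intended usage is something like this: check the environment first, and tell the user exactly which variables to set before ever calling the provider.

import litellm

def completion_with_env_check(model, messages):
    # Ask litellm which env vars the provider needs before calling it.
    env = litellm.validate_environment(model)
    if not env["keys_in_environment"]:
        missing = ", ".join(env["missing_keys"]) or "unknown"
        raise RuntimeError(f"Set these environment variables for {model}: {missing}")
    return litellm.completion(model=model, messages=messages)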

krrishdholakia commented 2 months ago

hey @paul-gauthier missed this - my bad!

Let me work on this today.

krrishdholakia commented 2 months ago

Tracking missing providers here:

- Providers
- Models

paul-gauthier commented 2 months ago

I installed v1.35.35.dev1, which has these changes, and things seem better! But Cohere still isn't returning keys.

model: command-r-plus
validate_environment: {'keys_in_environment': False, 'missing_keys': []}

My enumeration script uncovers a few other models that are in model_cost but for which validate_environment() still returns the seemingly invalid {'keys_in_environment': False, 'missing_keys': []}. You can see Cohere's models are in this list too.

num bad 12
babbage-002
command-light
command-r
command-r-plus
davinci-002
gpt-3.5-turbo-instruct
gpt-3.5-turbo-instruct-0914
together-ai-20.1b-40b
together-ai-3.1b-7b
together-ai-40.1b-70b
together-ai-7.1b-20b
together-ai-up-to-3b

Would it make sense to add the enumeration script as a test, possibly with a block list for any entries of model_cost that aren't legit? Or should those entries just be pruned from model_cost?
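Something like this untested sketch, where KNOWN_BAD would be the hypothetical block list for model_cost entries that aren't real callable models:

import os

import litellm

# Hypothetical block list for model_cost entries that aren't real callable models.
KNOWN_BAD = set()

def test_validate_environment_reports_missing_keys(monkeypatch):
    # With all API keys removed, every model should either report its keys as
    # present (keyless/local providers) or name the env vars that are missing.
    for var in [v for v in os.environ if v.endswith("_API_KEY")]:
        monkeypatch.delenv(var, raising=False)

    for model in litellm.model_cost:
        if model in KNOWN_BAD:
            continue
        res = litellm.validate_environment(model)
        assert res["keys_in_environment"] or res["missing_keys"], model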

krrishdholakia commented 2 months ago

Yup - I'm planning on adding the script you shared as the unit test. Fixing a stability issue right now.

Thanks for testing this! @paul-gauthier