BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

[Feature]: allow calling validate_environment with an api_key #4375

Closed: aantn closed this issue 3 months ago

aantn commented 4 months ago

The Feature

Please add an api_key argument to validate_environment() that validates the environment with the given api_key taken into account when it is not None.

Motivation, pitch

I can't use validate_environment() to check whether all required parameters are present, because users sometimes pass API keys via a CLI argument that I then pass into completion(). In that case, validate_environment() always returns False. A minimal sketch of the situation follows (the flag value and model name are made up for illustration).
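
from litellm.utils import validate_environment

# Hypothetical key parsed from a --api-key CLI flag rather than exported as an env var.
cli_api_key = "sk-from-cli"

# validate_environment() only looks at environment variables today, so with no
# OPENAI_API_KEY exported it reports the key as missing, even though passing
# completion(..., api_key=cli_api_key) would work fine.
check = validate_environment(model="gpt-3.5-turbo")
print(check)  # roughly: {'keys_in_environment': False, 'missing_keys': ['OPENAI_API_KEY']}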

Twitter / LinkedIn details

No response

krrishdholakia commented 3 months ago

Implemented here - https://github.com/BerriAI/litellm/commit/1ba3fcc3fbafddc5b05a522ff82e9cc077a26f31

Here's an example of using it @aantn

from litellm.utils import validate_environment

response_obj = validate_environment(model="gpt-3.5-turbo", api_key="sk-my-test-key")
assert (
    response_obj["keys_in_environment"] is True
), f"Missing keys={response_obj['missing_keys']}"