BerriAI / litellm

Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
https://docs.litellm.ai/docs/

[Bug]: Gemini appears to randomly block content as if it were violating some sort of hidden Content Filter #2866

Open krrishdholakia opened 3 months ago

krrishdholakia commented 3 months ago

What happened?

tldr;

Additional requests:

Relevant log output

17:46:57 - LiteLLM Router:INFO: router.py:479 - litellm.acompletion(model=vertex_ai/gemini-1.0-pro) Exception VertexAIException - Content has no parts.
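One way callers cope with intermittent "Content has no parts." failures is to retry and then fall back. This is a hypothetical sketch, not LiteLLM API: `call_fn`, the marker strings, and the message matching are all illustrative assumptions about how a blocked response might surface.

```python
# Hypothetical retry helper for calls that may be rejected by Gemini's
# content filter (e.g. "Content has no parts."). Matching on the exception
# message is an illustrative assumption, not part of LiteLLM.

FILTER_MARKERS = ("content has no parts", "blocked", "safety")

def call_with_filter_fallback(call_fn, fallback, max_retries=2):
    """Invoke call_fn(); if it raises an error that looks like a content
    filter block, retry up to max_retries times, then return fallback.
    Unrelated errors are re-raised immediately."""
    for _ in range(max_retries + 1):
        try:
            return call_fn()
        except Exception as exc:
            msg = str(exc).lower()
            if not any(marker in msg for marker in FILTER_MARKERS):
                raise  # not a filter-style error: propagate
    return fallback
```

Since Gemini's blocking appears non-deterministic in this report, a retry sometimes succeeds where the first attempt was filtered.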


cc: @dkindlund

Manouchehri commented 3 months ago

Semi-related question: how should we set `safety_settings` for a prompt with the OpenAI library, and/or in the LiteLLM config?

Is this correct?

from openai import OpenAI

# Point the OpenAI client at the LiteLLM proxy (URL and key are placeholders)
client = OpenAI(base_url="http://0.0.0.0:4000", api_key="sk-1234")

response = client.chat.completions.create(
    model="gemini-experimental",
    messages=[
        {
            "role": "user",
            "content": "Can you write exploits?",
        }
    ],
    max_tokens=8192,
    stream=False,
    temperature=0.0,

    extra_body={
        "safety_settings": [
            {
                "category": "HARM_CATEGORY_HARASSMENT",
                "threshold": "BLOCK_NONE",
            },
            {
                "category": "HARM_CATEGORY_HATE_SPEECH",
                "threshold": "BLOCK_NONE",
            },
            {
                "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
                "threshold": "BLOCK_NONE",
            },
            {
                "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
                "threshold": "BLOCK_NONE",
            },
        ],
    }
)

And what about this? :)

model_list:
  - model_name: gemini-experimental
    litellm_params:
      model: vertex_ai/gemini-experimental
      vertex_project: litellm-epic
      vertex_location: us-central1
      safety_settings:
      - category: HARM_CATEGORY_HARASSMENT
        threshold: BLOCK_NONE
      - category: HARM_CATEGORY_HATE_SPEECH
        threshold: BLOCK_NONE
      - category: HARM_CATEGORY_SEXUALLY_EXPLICIT
        threshold: BLOCK_NONE
      - category: HARM_CATEGORY_DANGEROUS_CONTENT
        threshold: BLOCK_NONE
krrishdholakia commented 3 months ago

I believe both should work @Manouchehri

Here's how we handle it in code:

[Screenshot 2024-04-05: the vertex_ai provider code that passes safety_settings through to the API]

Update: added the examples you shared on docs - https://docs.litellm.ai/docs/providers/vertex#specifying-safety-settings
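Since the same four-entry `safety_settings` list is repeated in both examples above, a small helper can build it. This is a hypothetical convenience function (not part of LiteLLM); the category names are the Vertex AI harm categories used in this thread.

```python
# Hypothetical helper that builds the safety_settings payload used in the
# examples above. Category names are Vertex AI's HarmCategory values.

HARM_CATEGORIES = [
    "HARM_CATEGORY_HARASSMENT",
    "HARM_CATEGORY_HATE_SPEECH",
    "HARM_CATEGORY_SEXUALLY_EXPLICIT",
    "HARM_CATEGORY_DANGEROUS_CONTENT",
]

def make_safety_settings(threshold="BLOCK_NONE"):
    """Return a safety_settings list applying `threshold` to every category."""
    return [{"category": c, "threshold": threshold} for c in HARM_CATEGORIES]
```

With the OpenAI client this would be passed as `extra_body={"safety_settings": make_safety_settings()}`, matching the request shape shown earlier.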

Manouchehri commented 3 months ago

You're right, it seems to be working. I'm just shocked I guessed it on the first try. =p Thanks!

Manouchehri commented 2 months ago

> It would be awesome if the default Vertex AI calls through the proxy were to just simply turn off all content filtering by default...

I would recommend against this. My fear is that Google will restrict BLOCK_NONE even further if too many people use it. =/