krrishdholakia opened this issue 3 months ago
Semi-related question: how should we set `safety_settings` in a prompt with the OpenAI library, and/or in the LiteLLM config? Is this correct?
```python
response = client.chat.completions.create(
    model="gemini-experimental",
    messages=[
        {
            "role": "user",
            "content": "Can you write exploits?",
        }
    ],
    max_tokens=8192,
    stream=False,
    temperature=0.0,
    extra_body={
        "safety_settings": [
            {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"},
            {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_NONE"},
            {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_NONE"},
            {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_NONE"},
        ],
    },
)
```
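Since the four `safety_settings` entries differ only in category, one way to keep them consistent is to build the list programmatically. This is a sketch only; the helper name and function are hypothetical, while the category and threshold strings come from the example above:

```python
# Category names as used in the request above.
HARM_CATEGORIES = [
    "HARM_CATEGORY_HARASSMENT",
    "HARM_CATEGORY_HATE_SPEECH",
    "HARM_CATEGORY_SEXUALLY_EXPLICIT",
    "HARM_CATEGORY_DANGEROUS_CONTENT",
]

def build_safety_settings(threshold="BLOCK_NONE"):
    """Hypothetical helper: one {category, threshold} dict per harm category."""
    return [{"category": c, "threshold": threshold} for c in HARM_CATEGORIES]
```

The result can then be passed as `extra_body={"safety_settings": build_safety_settings()}`.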
And what about this? :)
```yaml
model_list:
  - model_name: gemini-experimental
    litellm_params:
      model: vertex_ai/gemini-experimental
      vertex_project: litellm-epic
      vertex_location: us-central1
      safety_settings:
        - category: HARM_CATEGORY_HARASSMENT
          threshold: BLOCK_NONE
        - category: HARM_CATEGORY_HATE_SPEECH
          threshold: BLOCK_NONE
        - category: HARM_CATEGORY_SEXUALLY_EXPLICIT
          threshold: BLOCK_NONE
        - category: HARM_CATEGORY_DANGEROUS_CONTENT
          threshold: BLOCK_NONE
```
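If the config above is parsed into Python, a quick sanity check over the `safety_settings` entries can catch typos before they reach Vertex AI. A minimal sketch; the set of valid threshold names is an assumption based on the Vertex AI docs, not something stated in this thread:

```python
# Assumed HarmBlockThreshold values from the Vertex AI documentation.
VALID_THRESHOLDS = {
    "BLOCK_NONE",
    "BLOCK_ONLY_HIGH",
    "BLOCK_MEDIUM_AND_ABOVE",
    "BLOCK_LOW_AND_ABOVE",
}

def check_safety_settings(settings):
    """Hypothetical validator: each entry needs a known category prefix and threshold."""
    for entry in settings:
        assert entry["category"].startswith("HARM_CATEGORY_"), entry
        assert entry["threshold"] in VALID_THRESHOLDS, entry
    return True
```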
I believe both should work @Manouchehri
Here's how we handle it in code:
Update: added the examples you shared on docs - https://docs.litellm.ai/docs/providers/vertex#specifying-safety-settings
You're right, it seems to be working. I'm just shocked I guessed it on the first try. =p Thanks!
It would be awesome if default Vertex AI calls through the proxy simply turned off all content filtering by default...
I would recommend against this. My fear is Google will restrict BLOCK_NONE even more if too many people use it. =/
What happened?
`litellm.acompletion(model=vertex_ai/gemini-1.0-pro)` raised: Exception VertexAIException - Content has no parts.
https://github.com/GoogleCloudPlatform/generative-ai/issues/344#issuecomment-1945739479
tldr;
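The "Content has no parts" error typically means the candidate came back with empty `content.parts` (for example, when a safety filter blocks the response), so reading `parts[0].text` fails. A hedged sketch of a defensive check, using simplified dicts rather than the real Vertex AI SDK response types:

```python
def extract_text(candidate):
    """Return candidate text, or None when the content has no parts.

    `candidate` here is a plain dict stand-in for a Vertex AI response
    candidate; the real SDK uses typed objects with the same shape.
    """
    parts = candidate.get("content", {}).get("parts", [])
    if not parts:
        # Blocked or empty response: callers should inspect finish_reason
        # instead of assuming text is present.
        return None
    return parts[0].get("text")
```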
cc: @dkindlund