google-gemini / generative-ai-python

The official Python library for the Google Gemini API
https://pypi.org/project/google-generativeai/
Apache License 2.0

Safety Rating Mechanism faulty #258

Open · Ruhil-DS opened this issue 7 months ago

Ruhil-DS commented 7 months ago

Description of the bug:

I got a StopCandidateException while running a simple, clearly non-harmful prompt.

Tech used:

google-generativeai (Gemini, via a LangChain chat wrapper) and langchain (ChatPromptTemplate).

Below are the other relevant details needed to reproduce the issue:

Template

template = """Solve the math word problem given to you between triple backticks. \
You need not give any explanation. Convert the word problem to numerical \
equation and then a direct answer would be appreciated.\
\
problem: ```{problem}```
"""
problem = """two plus two"""

Other code:

from langchain.prompts import ChatPromptTemplate

lc_template = ChatPromptTemplate.from_template(template)
input_prompt = lc_template.format_messages(problem=problem)
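# `chat` is assumed to be a LangChain chat model backed by Gemini; its construction is not included in the report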
resp = chat(input_prompt)

Actual vs expected behavior:

Expected output

2+2 = 4

Actual output

I got the following error:

...
StopCandidateException: index: 0
finish_reason: SAFETY
safety_ratings {
  category: HARM_CATEGORY_SEXUALLY_EXPLICIT
  probability: NEGLIGIBLE
}
safety_ratings {
  category: HARM_CATEGORY_HATE_SPEECH
  probability: LOW
}
safety_ratings {
  category: HARM_CATEGORY_HARASSMENT
  probability: MEDIUM
}
safety_ratings {
  category: HARM_CATEGORY_DANGEROUS_CONTENT
  probability: NEGLIGIBLE
}

Any other information you'd like to share?

No response

Vishal-42 commented 6 months ago

I'm also facing the same issue. Any updates?

ashokwankhede commented 5 months ago

Same here. I'm also facing this issue. Any updates?

cfperez commented 3 months ago

You can adjust the safety settings in the Gemini API: https://ai.google.dev/gemini-api/docs/safety-settings
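
For anyone landing here, a minimal sketch of relaxing those thresholds with the google-generativeai SDK directly (the model name and thresholds are illustrative, not a recommendation):

import google.generativeai as genai
from google.generativeai.types import HarmCategory, HarmBlockThreshold

genai.configure(api_key="YOUR_API_KEY")

# Raise the block threshold for the categories that triggered SAFETY above.
model = genai.GenerativeModel(
    "gemini-pro",  # model name is illustrative
    safety_settings={
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    },
)

response = model.generate_content("Solve: two plus two")
print(response.text)

Safety settings can also be passed per request to generate_content. If you prefer to stay in LangChain, check whether your Gemini chat wrapper exposes an equivalent safety_settings argument.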