google-gemini / generative-ai-python

The official Python library for the Google Gemini API
https://pypi.org/project/google-generativeai/
Apache License 2.0

Strange harmful word action #589

Closed piotrekwisniewski closed 1 month ago

piotrekwisniewski commented 1 month ago

Description of the bug:

Hi, I'm a newbie to the Gemini API, but I've noticed some strange behaviour from the Gemini model.

I don't even know if this is the kind of issue that should be reported to you as a bug, because it's not a programming bug but rather the model's strange perception of a particular word. I wanted to add the Gemini API to my portfolio project, a programme that helps with gift exchanges by suggesting ideas when somebody doesn't know what to wish for.

Everything works fine with the Gemini API quickstart, but when I prompt a different phrase I get an error.

My code (simplified, just to show the problem):

import os
import google.generativeai as genai

GOOGLE_API_KEY = os.getenv('GOOGLE_API_KEY')
genai.configure(api_key=GOOGLE_API_KEY)
model = genai.GenerativeModel('gemini-1.5-flash')

messages = [{'role':'user',
     'parts': ["Hi, could you help me and generate 3 gift suggestions up to £150? I like fishing and romantic movies."]}
]

response = model.generate_content(messages)
for chunk in response:
    print(chunk.text)

Actual vs expected behavior:

For this code I get an error:

Traceback (most recent call last):
  File "..\", line 34, in <module>
    print(chunk.text)
          ^^^^^^^^^^
  File "..\Python\Python312\site-packages\google\generativeai\types\generation_types.py", line 476, in text
    if candidate.finish_message:
       ^^^^^^^^^^^^^^^^^^^^^^^^
  File "..\Python\Python312\site-packages\proto\message.py", line 906, in __getattr__
    raise AttributeError(
AttributeError: Unknown field for Candidate: finish_message. Did you mean: 'finish_reason'?
Process finished with exit code 1

What concerned me when I started investigating this issue is that the word "romantic" raises this exception! I realized it was not a programming bug, as I thought at the beginning, but the safety_settings.

Then I tried an explicitly inappropriate prompt: 'Hi, could you help me and generate 3 gift suggestions up to £150? I like fishing and s*x movies.'
This time the model took the appropriate action, because it returned:

"I understand you're looking for gift suggestions, but I'm programmed to provide safe and ethical responses. I can't recommend gifts related to adult content.

However, I can help you find great fishing gifts! Here are 3 options under £150: ..."
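If the safety block is too strict for a given use case, the SDK lets you tune per-category block thresholds via the safety_settings parameter. A minimal sketch, assuming the string aliases for category and threshold names that the library documents (verify against your SDK version):

```python
# Per-category block thresholds; string aliases for the HarmCategory and
# HarmBlockThreshold enums are accepted by google-generativeai.
safety_settings = {
    "HARM_CATEGORY_SEXUALLY_EXPLICIT": "BLOCK_ONLY_HIGH",
    "HARM_CATEGORY_HATE_SPEECH": "BLOCK_MEDIUM_AND_ABOVE",
    "HARM_CATEGORY_HARASSMENT": "BLOCK_MEDIUM_AND_ABOVE",
    "HARM_CATEGORY_DANGEROUS_CONTENT": "BLOCK_MEDIUM_AND_ABOVE",
}

# Would then be passed as, for example:
# model = genai.GenerativeModel('gemini-1.5-flash',
#                               safety_settings=safety_settings)
```

Whether relaxing the threshold is appropriate depends on the application; for a gift-suggestion tool it may be reasonable to raise the SEXUALLY_EXPLICIT threshold so "romantic movies" is not blocked.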

Proposed behavior:

It would be clearer if the API raised a ValueError stating which phrase violates the safety_settings instead of the above.

REMARK:

I don't know if the Gemini API upgrades automatically, because I've been asking Gemini for help with this error, and now (after about 3 hours), when I try to trigger the error above, I get a different answer for the same code:

ValueError: ("Invalid operation: The response.text quick accessor requires the response to contain a valid Part, but none were returned. The candidate's finish_reason is 3. The candidate's safety_ratings are: [category: HARM_CATEGORY_SEXUALLY_EXPLICIT\nprobability: MEDIUM\n, category: HARM_CATEGORY_HATE_SPEECH\nprobability: NEGLIGIBLE\n, category: HARM_CATEGORY_HARASSMENT\nprobability: NEGLIGIBLE\n, category: HARM_CATEGORY_DANGEROUS_CONTENT\nprobability: NEGLIGIBLE\n].", [category: HARM_CATEGORY_SEXUALLY_EXPLICIT probability: MEDIUM , category: HARM_CATEGORY_HATE_SPEECH probability: NEGLIGIBLE , category: HARM_CATEGORY_HARASSMENT probability: NEGLIGIBLE , category: HARM_CATEGORY_DANGEROUS_CONTENT probability: NEGLIGIBLE ])
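For anyone decoding the message above: the numeric finish_reason comes from the Candidate.FinishReason enum, where 3 means the candidate was blocked by safety settings. A small lookup sketch (values taken from the public enum; treat them as an assumption if your SDK version differs):

```python
# Numeric values of the Candidate.FinishReason enum as published in
# google.ai.generativelanguage; 3 is the SAFETY block seen in the error.
FINISH_REASONS = {
    0: "FINISH_REASON_UNSPECIFIED",
    1: "STOP",        # normal completion
    2: "MAX_TOKENS",  # hit the output token limit
    3: "SAFETY",      # blocked by safety settings
    4: "RECITATION",  # blocked for reciting training data
    5: "OTHER",
}

def explain(finish_reason: int) -> str:
    """Map a raw finish_reason value to its enum name."""
    return FINISH_REASONS.get(finish_reason, "UNKNOWN")

print(explain(3))  # SAFETY
```

Checking finish_reason before touching response.text avoids the exception entirely when a prompt gets blocked.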

Any other information you'd like to share?

So I spent a few hours wondering and investigating what programming bug I'd made, and it finally turned out that the API raised the wrong error, for the word 'romantic' :)

And what bad luck that I used the flagged phrase "romantic movies" :) Or maybe it's good luck, because now I know how safety_settings works :)

And as I noted above, I don't know if the Gemini API can upgrade itself automatically or if it's just a coincidence that somebody changed this today, but now it returns a clearer statement for this error.

Take care!

Hamza-nabil commented 1 month ago

This issue (see #559) is fixed in v0.8.3. Upgrading may resolve the problem:

pip install -U google-generativeai

Gunand3043 commented 1 month ago

@piotrekwisniewski , with the latest SDK version, it returns a clearer statement for the safety error.

In fact, having it report which phrase violates the safety settings is a good suggestion.

piotrekwisniewski commented 1 month ago

@Hamza-nabil @Gunand3043 now I see the point. I installed the SDK 5 days ago, just before the update (Oct 7, 2024). Yesterday I upgraded the package and the output is different; it now refers to the safety settings.

A funny coincidence for me, because it was the first time I'd plugged the API into my project, so at first I thought I'd made a programming error. If I had tried it after Oct 7, I wouldn't even have noticed this case :)

Thanks to everyone for clarifying things for me!

BR

MarkDaoust commented 1 month ago

Thanks @Hamza-nabil!