Closed: piotrekwisniewski closed this issue 1 month ago
This issue (see #559) is fixed in v0.8.3. Upgrading may resolve the problem:
pip install -U google-generativeai
@piotrulawisniewski, with the latest SDK version, it returns a clearer message for the safety error.
In fact, it's a good suggestion to have it report which phrase violated the safety setting.
@Hamza-nabil @Gunand3043 now I see the point. I installed the SDK 5 days ago, which was just before the update (Oct 7, 2024). Yesterday I upgraded the package and the output is different: it now refers to the safety setting.
Funny timing for me, because it was the first time I'd plugged an API into my project, so at first I thought I was making some programming error. If I'd tried it after Oct 7, I wouldn't even have noticed this case :)
Thanks to all for clarifying things for me!
BR
Thanks @Hamza-nabil!
Description of the bug:
Hi, I'm a newbie to the Gemini API, but I've run into some strange behaviour from the Gemini model.
I don't even know if this is the kind of issue that should be reported as a bug, because it's not a programming bug but rather the model's odd reaction to a particular word. I wanted to add the Gemini API to my portfolio project, which is a gift-exchange program that suggests presents when somebody doesn't know what to wish for.
When using the Gemini API quickstart everything works fine, but when I prompt a different phrase I get an error.
My code (simplified, just to show the problem):
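A minimal quickstart-style sketch of this kind of call (the model name, API-key handling, and exact prompt wording here are illustrative assumptions, not necessarily the original snippet):

```python
# Minimal sketch (assumed reconstruction, not the original snippet):
# a quickstart-style call with the kind of prompt that triggered the error.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

model = genai.GenerativeModel("gemini-1.5-flash")  # model name is an assumption

prompt = (
    "Hi, could you help me and generate 3 gift suggestions up to £150? "
    "I like fishing and romantic movies."
)

response = model.generate_content(prompt)
print(response.text)  # raises ValueError when the response is blocked by safety settings
```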
Actual vs expected behavior:
For this code I get an error:
What a surprise it was when I started investigating this issue and it turned out that the word "romantic" raises this exception! I realized that it was not a programming bug, as I thought at the beginning, but the safety_settings.
Then I tried an explicit prompt: 'Hi, could you help me and generate 3 gift suggestions up to £150? I like fishing and s*x movies.'
And then the model took the appropriate action, because it returned:
"I understand you're looking for gift suggestions, but I'm programmed to provide safe and ethical responses. I can't recommend gifts related to adult content.
However, I can help you find great fishing gifts! Here are 3 options under £150: ..."
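For anyone hitting the same block: the per-category blocking threshold can be tuned through the SDK's safety_settings parameter. A minimal sketch, assuming the category reported in the error further down (HARM_CATEGORY_SEXUALLY_EXPLICIT) and an illustrative model name and threshold:

```python
# Sketch: relax the blocking threshold for the category that flagged the
# prompt (HARM_CATEGORY_SEXUALLY_EXPLICIT). The threshold chosen here is
# purely illustrative, not a recommendation.
import google.generativeai as genai
from google.generativeai.types import HarmCategory, HarmBlockThreshold

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

model = genai.GenerativeModel(
    "gemini-1.5-flash",  # model name is an assumption
    safety_settings={
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    },
)
```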
Proposed behavior:
It would be clearer if the API raised a ValueError stating which phrase violates the safety_settings instead of the above.
REMARK:
I don't know whether the Gemini API upgrades automatically, because I've been asking Gemini for help with that error, and now (after about 3 hours), when I try to trigger the above error, I get a different answer for the same code:
ValueError: ("Invalid operation: The
response.text
quick accessor requires the response to contain a validPart
, but none were returned. The candidate's finish_reason is 3. The candidate's safety_ratings are: [category: HARM_CATEGORY_SEXUALLY_EXPLICIT\nprobability: MEDIUM\n, category: HARM_CATEGORY_HATE_SPEECH\nprobability: NEGLIGIBLE\n, category: HARM_CATEGORY_HARASSMENT\nprobability: NEGLIGIBLE\n, category: HARM_CATEGORY_DANGEROUS_CONTENT\nprobability: NEGLIGIBLE\n].", [category: HARM_CATEGORY_SEXUALLY_EXPLICIT probability: MEDIUM , category: HARM_CATEGORY_HATE_SPEECH probability: NEGLIGIBLE , category: HARM_CATEGORY_HARASSMENT probability: NEGLIGIBLE , category: HARM_CATEGORY_DANGEROUS_CONTENT probability: NEGLIGIBLE ])Any other information you'd like to share?
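A finish_reason of 3 corresponds to SAFETY, so the response was blocked by the safety filter rather than malformed. A small defensive sketch, assuming the same quickstart-style setup as above (model name and prompt are illustrative), that checks the candidate before reading response.text:

```python
# Sketch: check why generation stopped before touching response.text.
# finish_reason 3 corresponds to SAFETY, matching the error above.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

model = genai.GenerativeModel("gemini-1.5-flash")  # model name is an assumption
response = model.generate_content(
    "Hi, could you help me and generate 3 gift suggestions up to £150? "
    "I like fishing and romantic movies."
)

candidate = response.candidates[0]
if candidate.finish_reason == 3:  # 3 == SAFETY: blocked by the safety filter
    for rating in candidate.safety_ratings:
        print(rating.category, rating.probability)
else:
    print(response.text)
```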
Any other information you'd like to share?
So I spent a few hours wondering and investigating what programming bug I had made, and finally it turned out that the API raised a misleading error for the word 'romantic' :)
And what bad luck that I used the forbidden phrase "romantic movie" :) Or maybe it's good luck, because now I know how safety_settings work :)
And as I noted above, I don't know whether Gemini can upgrade the Gemini API automatically or it's just a coincidence that somebody changed this today, but now it returns a clearer message for this error.
Take care!