google-gemini / generative-ai-python

The official Python library for the Google Gemini API
https://pypi.org/project/google-generativeai/
Apache License 2.0

Issue with Safety Rating Mechanism in Gemini-Pro Model #144

Closed kuziTony closed 7 months ago

kuziTony commented 9 months ago

Description of the bug:

I am writing to report a bug I encountered while using the Gemini-Pro model. The issue pertains to the safety ratings mechanism, where the input and output safety ratings are inconsistent, specifically for the category of "Sexually Explicit" content.

input `safety_ratings`:

```
safety_ratings { category: HARM_CATEGORY_SEXUALLY_EXPLICIT probability: NEGLIGIBLE }
safety_ratings { category: HARM_CATEGORY_HATE_SPEECH probability: NEGLIGIBLE }
safety_ratings { category: HARM_CATEGORY_HARASSMENT probability: NEGLIGIBLE }
safety_ratings { category: HARM_CATEGORY_DANGEROUS_CONTENT probability: NEGLIGIBLE }
```

`response.prompt_feedback`:

```
block_reason: SAFETY
safety_ratings { category: HARM_CATEGORY_SEXUALLY_EXPLICIT probability: HIGH }
safety_ratings { category: HARM_CATEGORY_HATE_SPEECH probability: NEGLIGIBLE }
safety_ratings { category: HARM_CATEGORY_HARASSMENT probability: NEGLIGIBLE }
safety_ratings { category: HARM_CATEGORY_DANGEROUS_CONTENT probability: NEGLIGIBLE }
```

The issue is that although every harm category of the input, including "Sexually Explicit", is rated NEGLIGIBLE, the prompt feedback for the same request rates the "Sexually Explicit" category as HIGH and blocks it. This looks like an error in the model's safety rating system, since the two ratings for the same content do not align.

I believe this to be a bug in the API's review mechanism and would appreciate it if your team could investigate and resolve this issue promptly. The accuracy and reliability of safety ratings are crucial for my usage of the Gemini-Pro model.
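For reference, the ratings above were captured with a call shaped like this minimal sketch (the model name and API-key handling are assumptions, and the prompt string is a placeholder for the one that gets blocked):

```python
# Minimal repro sketch using the google-generativeai client.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key handling
model = genai.GenerativeModel("gemini-pro")

response = model.generate_content("...")  # the offending prompt goes here

# When the request is blocked, prompt_feedback carries block_reason: SAFETY
# and the per-category safety_ratings shown above.
print(response.prompt_feedback)
```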

Actual vs expected behavior:

No response

Any other information you'd like to share?

No response

williamito commented 9 months ago

Thanks for your report. Are you able to share the specific prompt which is causing this issue?

mlamothe commented 9 months ago

I'm not OP but this is trivial to replicate: Just go to Google AI Studio, turn the safety settings down to "low" and use the following a harmless prompt like this: I said, "let's watch 'Sex and the City'"

You'll trigger the "Sexually Explicit" filter. The bar is so ridiculously low as to make creative writing virtually impossible.

udayzee05 commented 8 months ago

I am getting the following safety error for the prompt "what is 1+1":

```
(/home/wolverine/llama2/venv) wolverine@wolverine-GP66-Leopard-11UG:~/llama2/tchat$ python main.py
what is 1+1
Traceback (most recent call last):
  File "/home/wolverine/llama2/tchat/main.py", line 27, in <module>
    result = chain({"content": content})
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain/chains/base.py", line 312, in __call__
    raise e
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain/chains/base.py", line 306, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain/chains/llm.py", line 103, in _call
    response = self.generate([inputs], run_manager=run_manager)
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain/chains/llm.py", line 115, in generate
    return self.llm.generate_prompt(
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 543, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 407, in generate
    raise e
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 397, in generate
    self._generate_with_cache(
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 576, in _generate_with_cache
    return self._generate(
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain_google_genai/chat_models.py", line 550, in _generate
    response: genai.types.GenerateContentResponse = _chat_with_retry(
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain_google_genai/chat_models.py", line 140, in _chat_with_retry
    return _chat_with_retry(**kwargs)
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 314, in iter
    return fut.result()
  File "/home/wolverine/llama2/venv/lib/python3.9/concurrent/futures/_base.py", line 433, in result
    return self.__get_result()
  File "/home/wolverine/llama2/venv/lib/python3.9/concurrent/futures/_base.py", line 389, in __get_result
    raise self._exception
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain_google_genai/chat_models.py", line 138, in _chat_with_retry
    raise e
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/langchain_google_genai/chat_models.py", line 131, in _chat_with_retry
    return generation_method(**kwargs)
  File "/home/wolverine/llama2/venv/lib/python3.9/site-packages/google/generativeai/generative_models.py", line 384, in send_message
    raise generation_types.StopCandidateException(response.candidates[0])
google.generativeai.types.generation_types.StopCandidateException: index: 0
content {
  parts { text: "2" }
  role: "model"
}
finish_reason: SAFETY
safety_ratings { category: HARM_CATEGORY_SEXUALLY_EXPLICIT probability: NEGLIGIBLE }
safety_ratings { category: HARM_CATEGORY_HATE_SPEECH probability: LOW }
safety_ratings { category: HARM_CATEGORY_HARASSMENT probability: MEDIUM }
safety_ratings { category: HARM_CATEGORY_DANGEROUS_CONTENT probability: NEGLIGIBLE }
```
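One workaround sketch, assuming your langchain-google-genai version accepts the safety_settings constructor argument and GOOGLE_API_KEY is set in the environment (the threshold names come from the google-generativeai enums):

```python
# Relax the thresholds that are tripping; HARASSMENT was rated MEDIUM
# in the trace above, so raise its blocking bar to BLOCK_ONLY_HIGH.
from langchain_google_genai import ChatGoogleGenerativeAI
from google.generativeai.types import HarmCategory, HarmBlockThreshold

llm = ChatGoogleGenerativeAI(
    model="gemini-pro",
    safety_settings={
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    },
)

print(llm.invoke("what is 1+1").content)
```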

rayanfer32 commented 8 months ago

Looks like the minimum safety threshold users are allowed to configure is `BLOCK_ONLY_HIGH`.

[screenshot: AI Studio safety settings panel]

You can check this yourself: open the safety settings panel on the right side of AI Studio, lower every category to the minimum, and click the "Get code" option.
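For context, this is roughly what the exported snippet looks like at the lowest selectable setting (a sketch; the exact "Get code" output may differ between AI Studio versions):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# The list-of-dicts form with string names is accepted by the SDK and is
# what Studio's exported snippets use.
safety_settings = [
    {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH"},
    {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_ONLY_HIGH"},
    {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_ONLY_HIGH"},
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_ONLY_HIGH"},
]

model = genai.GenerativeModel("gemini-pro", safety_settings=safety_settings)
```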

github-actions[bot] commented 8 months ago

Marking this issue as stale since it has been open for 14 days with no activity. This issue will be closed if no further activity occurs.

github-actions[bot] commented 7 months ago

This issue was closed because it has been inactive for 28 days. Please post a new issue if you need further assistance. Thanks!

aliasghar5124 commented 7 months ago

I am also facing this issue. Can anyone help me handle it?

ritvikforcebolt commented 6 months ago

I have created a chatbot using Gemini. It works well, but when I ask a question like "what is 1+1" it gives this error:

```
2024-03-18 18:57:23.751 Uncaught app exception
Traceback (most recent call last):
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 542, in _run_script
    exec(code, module.__dict__)
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\app.py", line 144, in <module>
    model_response = get_response(user_input, st.session_state['API_Key'])
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\app.py", line 128, in get_response
    response = st.session_state['conversation'].predict(input=user_input)
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain\chains\llm.py", line 293, in predict
    return self(kwargs, callbacks=callbacks)[self.output_key]
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain_core\_api\deprecation.py", line 145, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain\chains\base.py", line 378, in __call__
    return self.invoke(
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain\chains\base.py", line 163, in invoke
    raise e
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain\chains\base.py", line 153, in invoke
    self._call(inputs, run_manager=run_manager)
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain\chains\llm.py", line 103, in _call
    response = self.generate([inputs], run_manager=run_manager)
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain\chains\llm.py", line 115, in generate
    return self.llm.generate_prompt(
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain_core\language_models\chat_models.py", line 571, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain_core\language_models\chat_models.py", line 434, in generate
    raise e
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain_core\language_models\chat_models.py", line 424, in generate
    self._generate_with_cache(
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain_core\language_models\chat_models.py", line 608, in _generate_with_cache
    result = self._generate(
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain_google_genai\chat_models.py", line 555, in _generate
    response: genai.types.GenerateContentResponse = _chat_with_retry(
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain_google_genai\chat_models.py", line 152, in _chat_with_retry
    return _chat_with_retry(**kwargs)
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\tenacity\__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\tenacity\__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\tenacity\__init__.py", line 314, in iter
    return fut.result()
  File "C:\Users\CA\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 449, in result
    return self.__get_result()
  File "C:\Users\CA\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 401, in __get_result
    raise self._exception
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\tenacity\__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain_google_genai\chat_models.py", line 150, in _chat_with_retry
    raise e
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\langchain_google_genai\chat_models.py", line 134, in _chat_with_retry
    return generation_method(**kwargs)
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\google\generativeai\generative_models.py", line 434, in send_message
    self._check_response(response=response, stream=stream)
  File "C:\Users\CA\Documents\ritvik_chauhan\ritvik personal\langchain_project\chatgpt clone with summarization\chat\Lib\site-packages\google\generativeai\generative_models.py", line 461, in _check_response
    raise generation_types.StopCandidateException(response.candidates[0])
google.generativeai.types.generation_types.StopCandidateException: index: 0
finish_reason: SAFETY
safety_ratings { category: HARM_CATEGORY_SEXUALLY_EXPLICIT probability: NEGLIGIBLE }
safety_ratings { category: HARM_CATEGORY_HATE_SPEECH probability: LOW }
safety_ratings { category: HARM_CATEGORY_HARASSMENT probability: MEDIUM }
safety_ratings { category: HARM_CATEGORY_DANGEROUS_CONTENT probability: NEGLIGIBLE }
```

Why is it giving this error?
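Until the filter itself is fixed, a minimal defensive sketch is to catch the exception at the call site so the app degrades gracefully (`conversation` and `user_input` stand in for your own objects from the trace above):

```python
from google.generativeai.types import generation_types

def safe_predict(conversation, user_input):
    try:
        return conversation.predict(input=user_input)
    except generation_types.StopCandidateException as exc:
        # The exception wraps the blocked candidate, including its
        # finish_reason and per-category safety_ratings.
        return f"Response was blocked by the safety filter: {exc}"
```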

GIDDY269 commented 4 months ago

Has this issue been solved, or what was your workaround?

mlamothe commented 4 months ago

I gave up on Google a long time ago, but just today I was reading on Reddit that people still can't get this LLM to do anything; someone said it refused to give them code to delete some files, LOL. I use a bunch of other LLMs instead, including Llama 3 on Groq and Claude.

h777arsh commented 4 months ago

This is really bizarre. I’m using Gemini-1.5-pro-preview-0409 with Vertex AI to check the Raft Research paper PDF, and it throws the error mentioned above.
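In case it helps, a hedged sketch of relaxing the thresholds on the Vertex AI SDK path (the project ID, location, and prompt are placeholders for your setup):

```python
import vertexai
from vertexai.generative_models import (
    GenerativeModel,
    HarmCategory,
    HarmBlockThreshold,
)

vertexai.init(project="your-project-id", location="us-central1")

model = GenerativeModel("gemini-1.5-pro-preview-0409")
response = model.generate_content(
    "Summarize the attached paper.",  # placeholder for the PDF prompt
    safety_settings={
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    },
)
print(response.text)
```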

GIDDY269 commented 4 months ago

Like the other guy said, I gave up on Google. Try using another model like Llama 3 on Groq @h777arsh