google-gemini / generative-ai-python

The official Python library for the Google Gemini API
https://pypi.org/project/google-generativeai/
Apache License 2.0

google.generativeai.types.generation_types.StopCandidateException: index: 0 #384

Closed wencan closed 4 months ago

wencan commented 5 months ago

Description of the bug:

code:

"""
Install the Google AI Python SDK

$ pip install google-generativeai

See the getting started guide for more information:
https://ai.google.dev/gemini-api/docs/get-started/python
"""

import os

import google.generativeai as genai

genai.configure(api_key='XXX')

# Create the model
# See https://ai.google.dev/api/python/google/generativeai/GenerativeModel
generation_config = {
  "temperature": 1,
  "top_p": 0.95,
  "top_k": 64,
  "max_output_tokens": 8192,
  "response_mime_type": "text/plain",
}

model = genai.GenerativeModel(
  model_name="gemini-1.5-pro",
  generation_config=generation_config,
  # safety_settings = Adjust safety settings
  # See https://ai.google.dev/gemini-api/docs/safety-settings
  system_instruction="This text is from the ZeroMQ guide, the field of knowledge is computer science, the original text is in English, please translate into Chinese, do not use machine translation style, do not add translation notes.",
)

chat_session = model.start_chat()

response = chat_session.send_message("By Pieter Hintjens, CEO of iMatix")
print(response.text)
response = chat_session.send_message("Please use the issue tracker for all comments and errata. This version covers the latest stable release of ZeroMQ (3.2). If you are using older versions of ZeroMQ then some of the examples and explanations won’t be accurate.")
print(response.text)
response = chat_session.send_message("The Guide is originally in C , but also in PHP , Java , Python , Lua , and Haxe . We’ve also translated most of the examples into C++, C#, CL, Delphi, Erlang, F#, Felix, Haskell, Julia, Objective-C, Ruby, Ada, Basic, Clojure, Go, Haxe, Node.js, ooc, Perl, Scala, and Rust.")
print(response.text)
response = chat_session.send_message("Preface")
print(response.text)
response = chat_session.send_message("ZeroMQ in a Hundred Words")
print(response.text)
response = chat_session.send_message("ZeroMQ (also known as ØMQ, 0MQ, or zmq) looks like an embeddable networking library but acts like a concurrency framework. It gives you sockets that carry atomic messages across various transports like in-process, inter-process, TCP, and multicast. You can connect sockets N-to-N with patterns like fan-out, pub-sub, task distribution, and request-reply. It’s fast enough to be the fabric for clustered products. Its asynchronous I/O model gives you scalable multicore applications, built as asynchronous message-processing tasks. It has a score of language APIs and runs on most operating systems. ZeroMQ is from iMatix and is LGPLv3 open source.")
print(response.text)
response = chat_session.send_message("How It Began")
print(response.text)
response = chat_session.send_message("We took a normal TCP socket, injected it with a mix of radioactive isotopes stolen from a secret Soviet atomic research project, bombarded it with 1950-era cosmic rays, and put it into the hands of a drug-addled comic book author with a badly-disguised fetish for bulging muscles clad in spandex. Yes, ZeroMQ sockets are the world-saving superheroes of the networking world.")
print(response.text)

Actual vs expected behavior:

actual:

Traceback (most recent call last):
  File "/home/wencan/Projects/py_main/zguide/test.py", line 50, in <module>
    response = chat_session.send_message("We took a normal TCP socket, injected it with a mix of radioactive isotopes stolen from a secret Soviet atomic research project, bombarded it with 1950-era cosmic rays, and put it into the hands of a drug-addled comic book author with a badly-disguised fetish for bulging muscles clad in spandex. Yes, ZeroMQ sockets are the world-saving superheroes of the networking world.")
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/wencan/.local/lib/python3.12/site-packages/google/generativeai/generative_models.py", line 514, in send_message
    self._check_response(response=response, stream=stream)
  File "/home/wencan/.local/lib/python3.12/site-packages/google/generativeai/generative_models.py", line 542, in _check_response
    raise generation_types.StopCandidateException(response.candidates[0])
google.generativeai.types.generation_types.StopCandidateException: index: 0
finish_reason: SAFETY
safety_ratings {
  category: HARM_CATEGORY_SEXUALLY_EXPLICIT
  probability: MEDIUM
}
safety_ratings {
  category: HARM_CATEGORY_HATE_SPEECH
  probability: NEGLIGIBLE
}
safety_ratings {
  category: HARM_CATEGORY_HARASSMENT
  probability: LOW
}
safety_ratings {
  category: HARM_CATEGORY_DANGEROUS_CONTENT
  probability: NEGLIGIBLE
}

Any other information you'd like to share?

OS: Fedora Linux 40 (Workstation Edition) x86_64
Host: 21D0 ThinkBook 14 G4+ ARA
Kernel: 6.8.11-300.fc40.x86_64
Uptime: 2 hours, 46 mins
Packages: 2663 (rpm), 42 (flatpak)
Shell: bash 5.2.26
Resolution: 2880x1800
DE: GNOME 46.2
WM: Mutter
WM Theme: Adwaita
Theme: Adwaita [GTK2/3]
Icons: Adwaita [GTK2/3]
Terminal: gnome-terminal
CPU: AMD Ryzen 5 6600H with Radeon Graphics (12) @ 4.564GHz
GPU: AMD ATI Radeon 680M
Memory: 4608MiB / 13649MiB

Python 3.12.3
google-ai-generativelanguage 0.6.4
google-api-core 2.19.0
google-api-python-client 2.132.0
google-auth 2.30.0
google-auth-httplib2 0.2.0
google-generativeai 0.6.0
google-pasta 0.2.0
googleapis-common-protos 1.63.1

singhniraj08 commented 5 months ago

@wencan, This error is not from the Python SDK; it comes from the Gemini API itself. The last message of the chat_session was blocked by Gemini for safety reasons. Thank you!
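For readers hitting the same block: one common mitigation is to pass `safety_settings` when constructing the model, raising the block threshold so MEDIUM-probability ratings (as in the traceback above) no longer stop the candidate. A minimal sketch — the SDK also accepts the `HarmCategory`/`HarmBlockThreshold` enums, but plain strings are used here so the dict stands alone; the threshold values chosen are illustrative, not a recommendation:

```python
# Sketch: relax safety thresholds so only HIGH-probability content is blocked.
# String forms are accepted by google-generativeai in place of the enums.
safety_settings = {
    "HARM_CATEGORY_SEXUALLY_EXPLICIT": "BLOCK_ONLY_HIGH",
    "HARM_CATEGORY_HATE_SPEECH": "BLOCK_ONLY_HIGH",
    "HARM_CATEGORY_HARASSMENT": "BLOCK_ONLY_HIGH",
    "HARM_CATEGORY_DANGEROUS_CONTENT": "BLOCK_ONLY_HIGH",
}

# model = genai.GenerativeModel(
#     model_name="gemini-1.5-pro",
#     generation_config=generation_config,
#     safety_settings=safety_settings,
#     system_instruction=...,
# )
```

Note that safety settings only cover `finish_reason: SAFETY`; they do not help with other stop reasons.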

github-actions[bot] commented 4 months ago

Marking this issue as stale since it has been open for 14 days with no activity. This issue will be closed if no further activity occurs.

jpchavat commented 3 months ago

Any idea? I'm trying to translate a text that Gemini itself generated, and I receive the same error:

StopCandidateException(index: 0
finish_reason: OTHER
)

It's killing my production launch. :'(

from google.generativeai.types import HarmBlockThreshold, HarmCategory

model_name="gemini-1.5-flash"
safety_settings={
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
}

Library version: google-generativeai~=0.7.2
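Since `finish_reason: OTHER` is not covered by safety settings, a production-side workaround is to catch the exception per message and substitute a fallback rather than crash the whole run. A minimal sketch of that pattern — `StopCandidateError` below is a stand-in for `google.generativeai.types.generation_types.StopCandidateException` so the flow can be shown without an API key; the wrapper name and fallback format are hypothetical:

```python
# Stand-in for StopCandidateException, carrying the candidate's finish_reason.
class StopCandidateError(Exception):
    def __init__(self, finish_reason):
        super().__init__(f"finish_reason: {finish_reason}")
        self.finish_reason = finish_reason


def send_with_fallback(send_fn, message, fallback="[blocked]"):
    """Call send_fn(message); if the candidate was stopped (SAFETY, OTHER, ...),
    return a fallback marker instead of propagating the exception."""
    try:
        return send_fn(message)
    except StopCandidateError as exc:
        return f"{fallback} finish_reason={exc.finish_reason}"
```

In real code, `send_fn` would be something like `lambda m: chat_session.send_message(m).text`, catching the SDK's actual `StopCandidateException`; blocked segments can then be logged and retried with a rephrased prompt instead of killing the launch.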