google-gemini / generative-ai-python

The official Python library for the Google Gemini API
https://pypi.org/project/google-generativeai/
Apache License 2.0

<BlockedReason.OTHER: 2> with simple questions #54

Closed valenmoore closed 4 months ago

valenmoore commented 1 year ago

PaLM has been working pretty well until I randomly started running into this error: filters=[{'reason': <BlockedReason.OTHER: 2>}], top_p=0.95, top_k=40). The PaLM documentation says BlockedReason.OTHER (2) is an unspecified filter, and I don't know what that would be. I am using palm.chat with this example: ["what is your name", "my name is al"]. The context is "your name is al." However, when I ask the model "what is your name", it gives me that error immediately.
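For reference, a minimal sketch of the setup described, assuming the legacy palm.chat API with the library's default chat model (the API key placeholder is not part of the original report):

    import google.generativeai as palm

    palm.configure(api_key="...")  # your PaLM API key

    response = palm.chat(
        context="your name is al.",
        examples=[("what is your name", "my name is al")],
        messages="what is your name",
    )

    print(response.last)     # None when the reply was filtered
    print(response.filters)  # [{'reason': <BlockedReason.OTHER: 2>}] when blocked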

markmcd commented 1 year ago

Weird indeed. Let me look into this.

keertk commented 1 year ago

Internal: b/287431331

Django-Jiang commented 1 year ago

I have a similar issue with the PaLM text model. Is there any fix for this problem?

keertk commented 1 year ago

@Django-Jiang can you share more details about the request you're sending please? Context, examples, output, etc.

markmcd commented 1 year ago

@keertk - this was filed internally as b/296818753, I captured the relevant info in there.

(Edit: ah we have two now, I'll de-dupe them. Sorry for the noise folks)

blevlabs commented 1 year ago

I said "thats cool" and I got blocked reason 2.

User: how is it going

    Completion(
        candidates=[{
            'output': 'respond("I am doing well, thank you for asking. How are you doing today?")',
            'safety_ratings': [
                {'category': <HarmCategory.HARM_CATEGORY_DEROGATORY: 1>, 'probability': <HarmProbability.NEGLIGIBLE: 1>},
                {'category': <HarmCategory.HARM_CATEGORY_TOXICITY: 2>, 'probability': <HarmProbability.NEGLIGIBLE: 1>},
                {'category': <HarmCategory.HARM_CATEGORY_VIOLENCE: 3>, 'probability': <HarmProbability.NEGLIGIBLE: 1>},
                {'category': <HarmCategory.HARM_CATEGORY_SEXUAL: 4>, 'probability': <HarmProbability.NEGLIGIBLE: 1>},
                {'category': <HarmCategory.HARM_CATEGORY_MEDICAL: 5>, 'probability': <HarmProbability.LOW: 2>},
                {'category': <HarmCategory.HARM_CATEGORY_DANGEROUS: 6>, 'probability': <HarmProbability.NEGLIGIBLE: 1>},
            ],
        }],
        result='respond("I am doing well, thank you for asking. How are you doing today?")',
        filters=[],
        safety_feedback=[],
    )

[DEBUG] Model Response: respond("I am doing well, thank you for asking. How are you doing today?")

Agent: I am doing well, thank you for asking. How are you doing today?

User: thats cool

    Completion(candidates=[], result=None, filters=[{'reason': <BlockedReason.OTHER: 2>}], safety_feedback=[])

[stream ended here]
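
For anyone checking for this programmatically: a minimal sketch of how the block surfaces on the legacy palm.generate_text path; the model name and prompt here are assumptions rather than taken from the log above:

    import google.generativeai as palm

    palm.configure(api_key="...")  # your PaLM API key

    completion = palm.generate_text(
        model="models/text-bison-001",  # assumed model
        prompt="thats cool",
    )

    # On success, filters is empty and result holds the text; when the
    # response is dropped, result is None and filters carries the reason.
    if completion.result is None and completion.filters:
        print("Blocked:", completion.filters)
    else:
        print(completion.result)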

valenmoore commented 1 year ago

Following up on this issue. I am no longer having this problem, and I am not quite sure why it was happening, but here is what I do know. I've messed around with PaLM chat in Google MakerSuite. Oftentimes, if there are not adequate examples or the chatbot does not come up with a response, MakerSuite will display an empty bubble with the error message "None". From what I can tell, <BlockedReason.OTHER: 2> is the equivalent of this error: the server returned an empty response and did not know what to make of your input.

That said, I have no idea why this error occurs for simple prompts. I have found ways around it by rephrasing prompts, since PaLM often gets stuck on a particular phrasing. For example, with the problem I initially had, PaLM gave no response to "what is your name" despite repeated training, but if you change the wording to "what are you called", it gives the name with no problem. My best guess is simply that PaLM sometimes just is not smart enough, but I don't know more than that. Sorry for the non-answer, and I hope your problem is resolved soon.
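
If rephrasing works around the block, the retry can be automated. A minimal sketch, assuming the legacy palm.chat API; the fallback wordings are illustrative, not special:

    import google.generativeai as palm

    def chat_with_fallbacks(context, prompts):
        """Try each phrasing in turn until one is not filtered."""
        for prompt in prompts:
            response = palm.chat(context=context, messages=prompt)
            if response.last is not None:  # None means the reply was filtered
                return response.last
        return None  # every phrasing was blocked

    reply = chat_with_fallbacks(
        context="your name is al.",
        prompts=["what is your name", "what are you called"],
    )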

markmcd commented 1 year ago

Thanks for the feedback @vayvaychicken - you are right that this issue is specifically triggered by the literal query "what is your name", so if that's important to folks, for now you need to reword it slightly.

jcuenod commented 10 months ago

I'm getting {"reason":"OTHER"} (which I assume is the same thing) using the Node API. I am using text-bison-001 to generate summaries with a prompt I've been iterating on. I just encountered this error with a piece of content to summarize that has nothing noteworthy about it.

dzlab commented 9 months ago

> Thanks for the feedback @vayvaychicken - you are right that this issue is specifically triggered by the literal query "what is your name", so if that's important to folks, for now you need to reword it slightly.

@markmcd rewording may not always be possible, though.

miRx923 commented 6 months ago

I'm here to tell you how I avoided the "block_reason: OTHER" error.

I have no clue why the error occurs or what it means, but if you're fine with just skipping the prompt that causes it, there is a workaround. I'm analysing the sentiment of reviews in a .csv file, and if "block_reason: OTHER" occurs I just return "incorrect" instead of "positive" or "negative". If this approach doesn't fit your use case, you can adjust it.

Python code:

    import time
    import google.generativeai as genai

    genai.configure(api_key="...")  # your API key

    generation_config = {"temperature": 0}  # placeholder; use your own settings

    # Relax the configurable safety filters. Note that "block_reason: OTHER" is an
    # unspecified filter, so these settings do not prevent it.
    safety_settings = [
        {
            "category": "HARM_CATEGORY_HARASSMENT",
            "threshold": "BLOCK_NONE",
        },
        {
            "category": "HARM_CATEGORY_HATE_SPEECH",
            "threshold": "BLOCK_NONE",
        },
        {
            "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
            "threshold": "BLOCK_NONE",
        },
        {
            "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
            "threshold": "BLOCK_NONE",
        },
    ]

    model = genai.GenerativeModel(
        "gemini-pro",
        generation_config=generation_config,
        safety_settings=safety_settings,
    )
    chat = model.start_chat(history=[])

    def get_sentiment(prompt):
        try:
            response = chat.send_message(prompt)
            sentiment = response.text.strip()

        # Handle the blocked prompt exception and skip this prompt
        except genai.types.generation_types.BlockedPromptException as e:
            print(f"Prompt blocked due to: {e}")
            return "incorrect"

        time.sleep(0.25)  # you can skip this line
        return sentiment

This way, when the error occurs, the function just returns "incorrect" and continues with the next prompt. Hope I helped. ❤️

MarkDaoust commented 4 months ago

The internal eng team has made some significant improvements since this was reported. block_reason.OTHER is still possible, but it has been improved and no longer blocks "who are you".

Let's close this one and reopen a new one if there are fresh details.