google-gemini / generative-ai-python

The official Python library for the Google Gemini API
https://pypi.org/project/google-generativeai/
Apache License 2.0

"1" is blocked by safety reason? Seriously #126

Closed Andy963 closed 10 months ago

Andy963 commented 10 months ago

Description of the bug:

When I send "1" to gemini-pro, it raises an exception:

ValueError: The response.parts quick accessor only works for a single candidate, but none were returned. Check the response.prompt_feedback to see if the prompt was blocked.

Then I printed response.prompt_feedback and got this:

block_reason: SAFETY
safety_ratings { category: HARM_CATEGORY_SEXUALLY_EXPLICIT probability: NEGLIGIBLE }
safety_ratings { category: HARM_CATEGORY_HATE_SPEECH probability: LOW }
safety_ratings { category: HARM_CATEGORY_HARASSMENT probability: MEDIUM }
safety_ratings { category: HARM_CATEGORY_DANGEROUS_CONTENT probability: NEGLIGIBLE }

Then I went to AI Studio:


Actual vs expected behavior:

So can anybody tell me what happened?

Any other information you'd like to share?

No response
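
For context, here is a minimal reproduction of the failing call, with the check the error message itself suggests (a sketch; the API key is a placeholder):

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder

model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("1")

# response.text raises ValueError when no candidates come back;
# prompt_feedback then explains why the prompt was blocked.
if response.prompt_feedback.block_reason:
    print(response.prompt_feedback)
else:
    print(response.text)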

DiamondGo commented 10 months ago

The same here. lmfao.

API level censorship is stupid.

alexmavr commented 10 months ago

Same here. I'm running the API over an eval dataset and it won't finish the full dataset run. If this is rate limiting, then at least say so clearly.

LaMerdaSeca commented 10 months ago

I have the same error in Java: the response is blocked due to safety reasons, even though my settings include:

setSafetySettings(Collections.singletonList(
        SafetySetting.newBuilder()
                // note: no HarmCategory is set on this builder,
                // so the threshold may not apply to any category
                .setThreshold(SafetySetting.HarmBlockThreshold.BLOCK_NONE)
                .build()));

elavalasrinivasreddy commented 10 months ago

Same here. I got a response for the numbers below; for the remaining numbers, the same error.

- 0: got the definition of zero
- 6: got the explanation of a hexagon
- 10: got these subpoints:

  1. Computer Programming
  2. Machine Learning
  3. Data Analysis
  4. Web Development
  5. Digital Marketing
  6. Graphic Design
  7. Video Editing
  8. 3D Modeling and Animation
  9. Photography
  10. Music Production
Roviky commented 10 months ago

I'm hitting this problem too. How can I get it to run again?

HienBM commented 10 months ago

Try setting the threshold for every safety category to BLOCK_NONE. I ran it successfully with that:

safety_settings { category: HARM_CATEGORY_SEXUALLY_EXPLICIT threshold: BLOCK_NONE }
safety_settings { category: HARM_CATEGORY_HATE_SPEECH threshold: BLOCK_NONE }
safety_settings { category: HARM_CATEGORY_HARASSMENT threshold: BLOCK_NONE }
safety_settings { category: HARM_CATEGORY_DANGEROUS_CONTENT threshold: BLOCK_NONE }

jacklanda commented 10 months ago

> Try setting the threshold for every safety category to BLOCK_NONE. I ran it successfully with that.

Where should I set up this argument?

HienBM commented 10 months ago

> Where should I set up this argument?

This is my setup:

(screenshot: Screenshot 2024-01-14 075915)
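
For reference, a minimal Python sketch of this kind of setup (the API key and prompt are placeholders, and the code follows the SDK's documented safety_settings argument rather than the screenshot):

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder

model = genai.GenerativeModel("gemini-pro")

# Relax every safety category so borderline prompts such as "1"
# are not filtered out. Use with care: this disables the filters.
safety_settings = [
    {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_NONE"},
]

response = model.generate_content("1", safety_settings=safety_settings)
print(response.text)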

jacklanda commented 10 months ago

> This is my setup:
>
> (screenshot: Screenshot 2024-01-14 075915)

It does work. A simple but effective solution. Thank you so much!

I don't even know what consideration (AI safety detection?) led the Gemini team to do this, but I believe this issue can be closed now. Thanks @HienBM for the useful advice.

teddythinh commented 9 months ago

I'm getting this error while using a for loop to feed the model questions and collect its answers.

Here is how I configure the model:

import google.generativeai as genai

generation_config = {
  "candidate_count": 1,
  "max_output_tokens": 256,
  "temperature": 1.0,
  "top_p": 0.7,
}

# The SDK appears to normalize HARM_CATEGORY_DANGEROUS to
# HARM_CATEGORY_DANGEROUS_CONTENT, which is why only four
# categories show up in the repr below.
safety_settings = [
  {
    "category": "HARM_CATEGORY_DANGEROUS",
    "threshold": "BLOCK_NONE",
  },
  {
    "category": "HARM_CATEGORY_HARASSMENT",
    "threshold": "BLOCK_NONE",
  },
  {
    "category": "HARM_CATEGORY_HATE_SPEECH",
    "threshold": "BLOCK_NONE",
  },
  {
    "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
    "threshold": "BLOCK_NONE",
  },
  {
    "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
    "threshold": "BLOCK_NONE",
  },
]

model = genai.GenerativeModel(
    model_name="gemini-pro",
    generation_config=generation_config,
    safety_settings=safety_settings,
)

Printing the model shows:

 genai.GenerativeModel(
   model_name='models/gemini-pro',
   generation_config={'candidate_count': 1, 'max_output_tokens': 256, 'temperature': 1.0, 'top_p': 0.7},
   safety_settings={<HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: 10>: <HarmBlockThreshold.BLOCK_NONE: 4>, <HarmCategory.HARM_CATEGORY_HARASSMENT: 7>: <HarmBlockThreshold.BLOCK_NONE: 4>, <HarmCategory.HARM_CATEGORY_HATE_SPEECH: 8>: <HarmBlockThreshold.BLOCK_NONE: 4>, <HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: 9>: <HarmBlockThreshold.BLOCK_NONE: 4>}
)

And here is how I run it:

import time
from pprint import pprint

response = None
timeout_counter = 0
while response is None and timeout_counter <= 30:
    try:
        response = model.generate_content(messages)
    except Exception as msg:
        pprint(msg)
        print('sleeping because of exception ...')
        timeout_counter += 1  # without this the loop could never time out
        time.sleep(30)
        continue

if response is None:
    response_str = ""
else:
    response_str = response.text  # <- This line gets the error
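
For what it's worth, here is a defensive way to read a response without tripping the quick accessor, built on the documented candidates, finish_reason, and prompt_feedback fields (safe_text is a hypothetical helper, not part of the SDK):

def safe_text(response):
    # Prompt-level block: no candidates at all; prompt_feedback says why.
    if not response.candidates:
        return f"[prompt blocked: {response.prompt_feedback}]"
    candidate = response.candidates[0]
    # Candidate-level block (e.g. SAFETY, RECITATION): no usable text parts.
    if candidate.finish_reason.name != "STOP":
        return f"[no text: finish_reason={candidate.finish_reason.name}]"
    return response.text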
Marwa-Essam81 commented 9 months ago

I am also getting block_reason: OTHER even though I have set all safety settings to BLOCK_NONE.

Yinhance commented 9 months ago

> I am also getting block_reason: OTHER even though I have set all safety settings to BLOCK_NONE.

Have you solved this problem?

deepak032002 commented 9 months ago

Try this, it works for me:

import {
  GoogleGenerativeAI,
  HarmCategory,
  HarmBlockThreshold,
} from '@google/generative-ai';
import { ConfigService } from '@nestjs/config';

const config = new ConfigService();

export async function generateText(data: string, type: 'title' | 'content') {
  const genAI = new GoogleGenerativeAI(config.get('GOOGLE_GEMINI_API_KEY'));
  const model = genAI.getGenerativeModel({ model: 'gemini-pro' });

  const generationConfig = {
    temperature: 0.9,
    topK: 1,
    topP: 1,
    maxOutputTokens: 2048,
  };

  const safetySettings = [
    { category: HarmCategory.HARM_CATEGORY_HARASSMENT, threshold: HarmBlockThreshold.BLOCK_NONE },
    { category: HarmCategory.HARM_CATEGORY_HATE_SPEECH, threshold: HarmBlockThreshold.BLOCK_NONE },
    { category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT, threshold: HarmBlockThreshold.BLOCK_NONE },
    { category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT, threshold: HarmBlockThreshold.BLOCK_NONE },
  ];

  const history = [];

  const chat = model.startChat({ generationConfig, safetySettings, history });

  let msg: string = 'YOUR_MESSAGE';

  const result = await chat.sendMessage(msg);
  const response = result.response;
  const text = response.text();
  // '<br>' is a reconstruction; markdown rendering mangled the original second argument
  return text.replaceAll('\n', '<br>');
}

gunsterpsp commented 8 months ago

GoogleGenerativeAIResponseError: [GoogleGenerativeAI Error]: Candidate was blocked due to RECITATION
    at response.text (file:///F:/oliverbackup/flutter_app/backend/node_modules/@google/generative-ai/dist/index.mjs:265:23)
    at getAIResponse (file:///F:/oliverbackup/flutter_app/backend/controllers/MessagesController.js:148:31)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
  response: {
    candidates: [ [Object] ],
    promptFeedback: { safetyRatings: [Array] },
    text: [Function (anonymous)]
  }
}

How about this one?
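
In the Python SDK the same condition surfaces as finish_reason == RECITATION on the candidate (and response.text raises the same quick-accessor ValueError). A sketch of a retry guard (retry_on_recitation is a hypothetical helper, not part of the SDK):

import time

def retry_on_recitation(model, prompt, attempts=3):
    response = None
    for _ in range(attempts):
        response = model.generate_content(prompt)
        candidate = response.candidates[0] if response.candidates else None
        if candidate and candidate.finish_reason.name == "RECITATION":
            time.sleep(1)  # back off, then resample; the output may differ
            continue
        break
    return response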

shreyash-99 commented 8 months ago

> I am also getting block_reason: OTHER even though I have set all safety settings to BLOCK_NONE.
>
> Have you solved this problem?

Hey, have you figured out how to solve this problem?

jacklanda commented 8 months ago

Please consider reopening this closed issue or filing another one for further discussion.

pranavkshirsagar1924 commented 4 months ago

Well, in my case the problem was due to an explicit prompt.