Andy963 closed this issue 10 months ago.
The same here. lmfao.
API level censorship is stupid.
Same here. I'm running the API over an eval dataset and it won't finish the full run. If this is rate limiting, then at least say so clearly.
I have the same error in Java: the response is blocked due to safety reasons, even though in my settings I have:

```java
setSafetySettings(Collections.singletonList(
    SafetySetting.newBuilder()
        .setThreshold(SafetySetting.HarmBlockThreshold.BLOCK_NONE)
        .build()));
```

(Note that a SafetySetting normally also names the harm category it applies to, with one list entry per category; a single threshold with no category may not cover all of them.)
Same here. I got a response for the numbers below, and the same error for the remaining numbers.
0: got the definition of zero
6: got the explanation of a hexagon
10: got these subpoints:
- Computer Programming
- Machine Learning
- Data Analysis
- Web Development
- Digital Marketing
- Graphic Design
- Video Editing
- 3D Modeling and Animation
- Photography
- Music Production
I'm running into this problem too. How can I run it again?
Try setting the threshold to 'BLOCK_NONE'. I ran it successfully with that.

```
block_reason: SAFETY
safety_ratings { category: HARM_CATEGORY_SEXUALLY_EXPLICIT probability: BLOCK_NONE }
safety_ratings { category: HARM_CATEGORY_HATE_SPEECH probability: BLOCK_NONE }
safety_ratings { category: HARM_CATEGORY_HARASSMENT probability: BLOCK_NONE }
safety_ratings { category: HARM_CATEGORY_DANGEROUS_CONTENT probability: BLOCK_NONE }
```
Where should I set up this argument?
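The argument can go either on the model or on the individual request. Here is a minimal sketch with the Python SDK (google.generativeai); the API key and prompt are placeholders:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder

safety_settings = [
    {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_NONE"},
]

# Option 1: set once when constructing the model.
model = genai.GenerativeModel("gemini-pro", safety_settings=safety_settings)

# Option 2: pass per request, which takes precedence for that call.
response = model.generate_content("Hello", safety_settings=safety_settings)
print(response.text)
```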
This is my setup:
It does work. A simple but effective solution. Thank you so much!
I don't even know what the Gemini team's consideration was in doing this (AI safety detection?), but I believe this issue can be closed now. Thanks @HienBM for the useful advice.
I'm getting this error while using a for loop to feed the model questions and collect the answers.
Here is how I configure the model:
```python
generation_config = {
    "candidate_count": 1,
    "max_output_tokens": 256,
    "temperature": 1.0,
    "top_p": 0.7,
}

safety_settings = [
    {"category": "HARM_CATEGORY_DANGEROUS", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_NONE"},
]

model = genai.GenerativeModel(
    model_name="gemini-pro",
    generation_config=generation_config,
    safety_settings=safety_settings,
)
```
The model returns:
```
genai.GenerativeModel(
    model_name='models/gemini-pro',
    generation_config={'candidate_count': 1, 'max_output_tokens': 256, 'temperature': 1.0, 'top_p': 0.7},
    safety_settings={<HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: 10>: <HarmBlockThreshold.BLOCK_NONE: 4>, <HarmCategory.HARM_CATEGORY_HARASSMENT: 7>: <HarmBlockThreshold.BLOCK_NONE: 4>, <HarmCategory.HARM_CATEGORY_HATE_SPEECH: 8>: <HarmBlockThreshold.BLOCK_NONE: 4>, <HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: 9>: <HarmBlockThreshold.BLOCK_NONE: 4>}
)
```
And here is how I run it:
```python
import time
from pprint import pprint

response = None
timeout_counter = 0
while response is None and timeout_counter <= 30:
    try:
        response = model.generate_content(messages)
    except Exception as msg:
        pprint(msg)
        print('sleeping because of exception ...')
        time.sleep(30)
        timeout_counter += 1  # without this the loop never times out
        continue

if response is None:
    response_str = ""
else:
    response_str = response.text  # <- This line gets the error
```
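Worth noting: a blocked prompt does not raise inside generate_content, so the except branch above never catches it; the ValueError only appears when .text is read. A hedged sketch of a safer read for this loop (attribute names as exposed by the google.generativeai response objects):

```python
def safe_text(response):
    # A blocked prompt is reported here instead of via an exception.
    if response.prompt_feedback.block_reason:
        print("prompt blocked:", response.prompt_feedback)
        return ""
    # .text also raises when there is no candidate, or when the candidate
    # stopped for SAFETY / RECITATION / OTHER rather than STOP.
    if not response.candidates or response.candidates[0].finish_reason.name != "STOP":
        print("no usable candidate:", response.candidates)
        return ""
    return response.text

response_str = safe_text(response) if response is not None else ""
```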
I am also getting block_reason: OTHER even though I have set all safety settings to BLOCK_NONE.
Have you solved this problem?
Try this, it works for me:

```ts
import { GoogleGenerativeAI, HarmCategory, HarmBlockThreshold } from '@google/generative-ai';
import { ConfigService } from '@nestjs/config';

const config = new ConfigService();

export async function generateText(data: string, type: 'title' | 'content') {
  const genAI = new GoogleGenerativeAI(config.get('GOOGLE_GEMINI_API_KEY'));
  const model = genAI.getGenerativeModel({ model: 'gemini-pro' });

  const generationConfig = { temperature: 0.9, topK: 1, topP: 1, maxOutputTokens: 2048 };

  const safetySettings = [
    { category: HarmCategory.HARM_CATEGORY_HARASSMENT, threshold: HarmBlockThreshold.BLOCK_NONE },
    { category: HarmCategory.HARM_CATEGORY_HATE_SPEECH, threshold: HarmBlockThreshold.BLOCK_NONE },
    { category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT, threshold: HarmBlockThreshold.BLOCK_NONE },
    { category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT, threshold: HarmBlockThreshold.BLOCK_NONE },
  ];

  const history = [];
  const chat = model.startChat({ generationConfig, safetySettings, history });

  let msg: string = 'YOUR_MESSAGE';
  const result = await chat.sendMessage(msg);
  const response = result.response;
  const text = response.text();
  return text.replaceAll('\n', '\n');
}
```
```
GoogleGenerativeAIResponseError: [GoogleGenerativeAI Error]: Candidate was blocked due to RECITATION
    at response.text (file:///F:/oliverbackup/flutter_app/backend/node_modules/@google/generative-ai/dist/index.mjs:265:23)
    at getAIResponse (file:///F:/oliverbackup/flutter_app/backend/controllers/MessagesController.js:148:31)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
  response: {
    candidates: [ [Object] ],
    promptFeedback: { safetyRatings: [Array] },
    text: [Function (anonymous)]
  }
}
```
how about this one?
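RECITATION is a different block from SAFETY: it means the candidate matched source material too closely, so BLOCK_NONE safety thresholds don't influence it. A hedged Python sketch of how one might detect that case before calling .text (finish_reason names per the API's FinishReason enum; the workaround in the comment is only a common suggestion, not a guaranteed fix):

```python
response = model.generate_content(prompt)

finish = response.candidates[0].finish_reason.name if response.candidates else "NONE"
if finish == "RECITATION":
    # Safety settings don't apply here; rephrasing the prompt or
    # retrying with a higher temperature sometimes gets past it.
    print("blocked for recitation; consider rewording the prompt")
else:
    print(response.text)
```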
Hey, have you figured out how to solve this problem?
Please consider reopening this closed issue or filing another one for further discussion.
Well, in my case the problem was due to an explicit prompt.
Description of the bug:
When I send "1" to gemini-pro, it raises an exception:

```
ValueError: The `response.parts` quick accessor only works for a single candidate, but none were returned. Check the `response.prompt_feedback` to see if the prompt was blocked.
```

Then I print response.prompt_feedback and get this:
```
block_reason: SAFETY
safety_ratings { category: HARM_CATEGORY_SEXUALLY_EXPLICIT probability: NEGLIGIBLE }
safety_ratings { category: HARM_CATEGORY_HATE_SPEECH probability: LOW }
safety_ratings { category: HARM_CATEGORY_HARASSMENT probability: MEDIUM }
safety_ratings { category: HARM_CATEGORY_DANGEROUS_CONTENT probability: NEGLIGIBLE }
```

Then I went to AI Studio. So, can anybody tell me what happened?

Actual vs expected behavior:
No response
Any other information you'd like to share?
No response