Closed Heather95 closed 2 months ago
Hi, I set the safety setting to BLOCK_NONE; hope that fixes your issue. I'm not sure it will, though, and I don't really know what the "review process" means. My guess is that Google performs some extra review of your API usage afterwards when you run with lower safety settings.
By default, the Gemini Pro API applies strict safety settings that block questionable prompts and return an error before any output is generated. This cannot be disabled on the user end without new code, specifically a "BLOCK_NONE" threshold for each of the four safety_settings categories. I don't know how to write the Python code myself.
Please refer to the official documentation regarding this issue.
https://ai.google.dev/tutorials/python_quickstart#safety_settings
Full document with code examples at the bottom: https://ai.google.dev/docs/safety_setting_gemini
Note that it states, "Adjusting to lower safety settings will trigger a more in-depth review process of your application." I'm not certain what this means.
The Gemini Pro API is a great alternative to local LLMs because it allows 60 queries per minute, i.e. one query per second, for free to every user.