Closed · DisboardTetta closed this 4 weeks ago
Hello! Simply pass `format: Gemini.JSON` in the config for (nearly) any Gemini method, and in the raw output (which you'll have to filter through yourself 😅) you can find the safety settings.
For the sake of not overcomplicating the library, all of the "advanced" output fields are hidden behind `Gemini.JSON`
and are not parsed into built-in fields.
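For reference, filtering the raw JSON for the safety information might look like the sketch below. The response shape (`candidates[].safetyRatings`) follows Google's REST API documentation; the `sampleResponse` object and the `getSafetyRatings` helper are illustrative, not part of the library.

```javascript
// Illustrative raw response in the shape documented for the Gemini REST API.
// (Not real output — a hand-written sample for demonstration.)
const sampleResponse = {
  candidates: [
    {
      content: { parts: [{ text: "..." }] },
      safetyRatings: [
        { category: "HARM_CATEGORY_HARASSMENT", probability: "NEGLIGIBLE" },
        { category: "HARM_CATEGORY_HATE_SPEECH", probability: "LOW" },
      ],
    },
  ],
};

// Hypothetical helper: pull just the safety ratings out of the first candidate.
function getSafetyRatings(response) {
  return (response.candidates?.[0]?.safetyRatings ?? []).map(
    (r) => `${r.category}: ${r.probability}`
  );
}

console.log(getSafetyRatings(sampleResponse));
// → ["HARM_CATEGORY_HARASSMENT: NEGLIGIBLE", "HARM_CATEGORY_HATE_SPEECH: LOW"]
```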
@DisboardTetta I completely misunderstood this request. I will be sure to add this feature later.
Thank you! This is a really useful feature for me as well :-)
This has been implemented in Gemini AI v2.2, which is now available on NPM.
How can I set the safety settings described here: https://ai.google.dev/docs/safety_setting_gemini? I need this because Gemini mistakenly flags some texts as harassment, even though they are just descriptions of scenes from a book.
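The setting described in those docs maps to a `safetySettings` array in the underlying `generateContent` REST request. A minimal sketch of the request body, assuming the documented category and threshold names (the `buildRequestBody` helper and the prompt text are made up for illustration; the actual network call is commented out since it needs a real API key):

```javascript
// Hypothetical helper: build a generateContent request body that relaxes
// the harassment filter, per the safety-settings docs linked above.
function buildRequestBody(prompt) {
  return {
    contents: [{ parts: [{ text: prompt }] }],
    safetySettings: [
      {
        category: "HARM_CATEGORY_HARASSMENT",
        // BLOCK_ONLY_HIGH: only block content rated high-probability harassment.
        threshold: "BLOCK_ONLY_HIGH",
      },
    ],
  };
}

const body = buildRequestBody("Describe the duel scene from the novel.");
console.log(JSON.stringify(body, null, 2));

// Sending it would look roughly like this (key assumed in GEMINI_API_KEY):
// await fetch(
//   `https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent?key=${process.env.GEMINI_API_KEY}`,
//   {
//     method: "POST",
//     headers: { "Content-Type": "application/json" },
//     body: JSON.stringify(body),
//   }
// );
```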