microsoft / semantic-kernel

Integrate cutting-edge LLM technology quickly and easily into your apps
https://aka.ms/semantic-kernel
MIT License

.Net Gemini Connector - GeminiPromptExecutionSettings with multiple GeminiSafetyCategory throw a 400 exception. #6633

Open gonsss opened 3 months ago

gonsss commented 3 months ago

Describe the bug When I specify multiple SafetySettings I get a 400 Bad Request.

To Reproduce Steps to reproduce the behavior:

  1. Run the following code; it throws an exception:

```csharp
GeminiPromptExecutionSettings proSettings = new()
{
    Temperature = 0,
    TopP = 1,
    //TopK = 20,
    MaxTokens = 8102,
    SafetySettings =
    [
        new(GeminiSafetyCategory.Harassment, GeminiSafetyThreshold.BlockOnlyHigh),
        new(GeminiSafetyCategory.Dangerous, GeminiSafetyThreshold.BlockOnlyHigh),
        new(GeminiSafetyCategory.DangerousContent, GeminiSafetyThreshold.BlockOnlyHigh)
    ],
    ModelId = geminiProModelId,
};
```

This happens with Microsoft.SemanticKernel.Connectors.Google 1.14.1-alpha. If I leave only one safety setting, the call works as expected.

Platform

Krzysztof318 commented 2 months ago

This functionality is implemented correctly, but while developing the Gemini connector I found that Gemini's documentation is often inaccurate. You may be getting the error because you used two overlapping categories, Dangerous and DangerousContent.

From the Gemini docs:

This will be enforced on the GenerateContentRequest.contents and GenerateContentResponse.candidates. There should not be more than one setting for each SafetyCategory type. The API will block any contents and responses that fail to meet the thresholds set by these settings. This list overrides the default settings for each SafetyCategory specified in the safetySettings. If there is no SafetySetting for a given SafetyCategory provided in the list, the API will use the default safety setting for that category.

Harm categories HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT, HARM_CATEGORY_DANGEROUS_CONTENT, HARM_CATEGORY_HARASSMENT are supported.
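Per the quoted docs, each SafetyCategory may appear at most once, and only the four HARM_CATEGORY_* values listed are accepted by the Gemini API. A minimal corrected configuration would therefore drop the duplicate Dangerous/DangerousContent pair and use distinct categories only. This is a sketch against the report's snippet; the HateSpeech and SexuallyExplicit enum member names are assumed to exist on GeminiSafetyCategory and are not verified against 1.14.1-alpha:

```csharp
// Sketch: one safety setting per distinct category, avoiding the
// Dangerous/DangerousContent overlap that likely caused the 400.
GeminiPromptExecutionSettings proSettings = new()
{
    Temperature = 0,
    TopP = 1,
    MaxTokens = 8102,
    SafetySettings =
    [
        new(GeminiSafetyCategory.Harassment, GeminiSafetyThreshold.BlockOnlyHigh),
        new(GeminiSafetyCategory.DangerousContent, GeminiSafetyThreshold.BlockOnlyHigh),
        // Assumed member names, mirroring HARM_CATEGORY_HATE_SPEECH and
        // HARM_CATEGORY_SEXUALLY_EXPLICIT from the Gemini API docs:
        new(GeminiSafetyCategory.HateSpeech, GeminiSafetyThreshold.BlockOnlyHigh),
        new(GeminiSafetyCategory.SexuallyExplicit, GeminiSafetyThreshold.BlockOnlyHigh)
    ],
    ModelId = geminiProModelId, // model id variable taken from the original report
};
```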