Closed · skoru01 closed this 1 month ago
Hi @skoru01, this error is specifically for responseMIMEType in GenerationConfig. It's possible to set temperature, topK and topP with function calling, for example:
let generativeModel = GenerativeModel(
  name: "gemini-1.5-flash-latest",
  apiKey: apiKey,
  generationConfig: GenerationConfig(temperature: 0.0, topP: 0.99, topK: 1),
  tools: [Tool(functionDeclarations: [...])]
)
If you'd like to use JSON format responses in some cases, and function calling in others, you could create a separate instance of the model / generation config for the two use cases. Does that answer your question?
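For illustration, a minimal sketch of that two-instance split, assuming apiKey is defined as above and the function declarations are filled in (the variable names here are just placeholders):
let jsonModel = GenerativeModel(
  name: "gemini-1.5-flash-latest",
  apiKey: apiKey,
  // This instance produces JSON output and has no tools attached.
  generationConfig: GenerationConfig(temperature: 0.0, responseMIMEType: "application/json")
)

let functionCallingModel = GenerativeModel(
  name: "gemini-1.5-flash-latest",
  apiKey: apiKey,
  // Sampling parameters work alongside tools; just omit responseMIMEType here.
  generationConfig: GenerationConfig(temperature: 0.0, topP: 0.99, topK: 1),
  tools: [Tool(functionDeclarations: [...])]
)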
Hi @andrewheard Thanks for the suggestion; I will try it. Is it not possible to use responseMIMEType along with function calling? Function calling is giving more accurate options, and I want JSON as well.
I am using gemini-1.5-flash-latest, and sometimes I get a pretty-printed Python or JSON string instead of raw JSON, whereas gemini-1.5-pro-latest is really not giving results as accurate as gemini-1.5-flash-latest.
@skoru01 Would it be possible for you to describe your use case of function calling in more detail? Potentially you could add an additional function for your final response and use the function calling mode ANY. This way it would never produce a natural-language/text response. In Swift, the setup would look something like:
let model = GenerativeModel(
  ...,
  tools: [Tool(functionDeclarations: [...])],
  toolConfig: ToolConfig(functionCallingConfig: FunctionCallingConfig(mode: .any))
)
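To make the "final response" idea concrete, here is a rough sketch; the function name and parameters are made up for illustration, and the exact FunctionDeclaration/Schema initializers may vary between SDK versions:
// Hypothetical "final answer" function so that, with mode .any, the model always
// replies with structured function-call arguments instead of free-form text.
let finalResponse = FunctionDeclaration(
  name: "send_final_response",
  description: "Returns the final answer to the user as structured data",
  parameters: [
    "answer": Schema(type: .string, description: "The final answer text")
  ],
  requiredParameters: ["answer"]
)

let model = GenerativeModel(
  name: "gemini-1.5-flash-latest",
  apiKey: apiKey,
  tools: [Tool(functionDeclarations: [finalResponse])], // plus your other function declarations
  toolConfig: ToolConfig(functionCallingConfig: FunctionCallingConfig(mode: .any))
)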
@andrewheard Thank you, I will check this and get back to you if it's not working.
Hi! I'm looking at an issue over in genkit and I think it's related/the same problem, but in Node.js. We're setting responseMimeType to 'application/json' and also adding tools in a chat request. Is this not possible, then?
@andrewheard
Marking this issue as stale since it has been open for 14 days with no activity. This issue will be closed if no further activity occurs.
Hi @andrewheard Sorry for the late response. GenerationConfig is working well, but the code below is causing an issue:
let model = GenerativeModel(
  ...,
  tools: [Tool(functionDeclarations: [...])],
  toolConfig: ToolConfig(functionCallingConfig: FunctionCallingConfig(mode: .any))
)
Response:
{
  "error": {
    "code": 500,
    "message": "An internal error has occurred. Please retry or report in https://developers.generativeai.google/guide/troubleshooting",
    "status": "INTERNAL"
  }
}
FYI, if I send .auto in the FunctionCallingConfig, it works fine.
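For reference, a minimal sketch of the working variant, which reuses the same setup and only changes the mode:
let model = GenerativeModel(
  ...,
  tools: [Tool(functionDeclarations: [...])],
  // .auto lets the model choose between a text reply and a function call,
  // and does not trigger the 500 error in this setup.
  toolConfig: ToolConfig(functionCallingConfig: FunctionCallingConfig(mode: .auto))
)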
Marking this issue as stale since it has been open for 14 days with no activity. This issue will be closed if no further activity occurs.
This issue was closed because it has been inactive for 28 days. Please post a new issue if you need further assistance. Thanks!
Description of the bug:
Hi, I am using function calling along with GenerationConfig, and I am getting the error message below.
How can we use function calling tools along with GenerationConfig in this SDK? I want to add temperature, topK and topP to function calling for more accurate responses.
Request:
Actual vs expected behavior:
No response
Any other information you'd like to share?
No response