Closed: chrbsg closed this issue 2 months ago
Ack, thanks for the report. It's not strictly a Python-library problem, as the API itself is missing the functionality, but I'll bring it up with the team nonetheless. (Googlers: b/292466007)
+1 to this, also having this issue
I found that the PaLM Chat model in the Vertex AI SDK supports max_output_tokens: https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/text-chat
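A minimal sketch of that Vertex AI approach, assuming a GCP project with the Vertex AI API enabled (the project ID, region, and model version below are placeholders):

```python
import vertexai
from vertexai.language_models import ChatModel

# Placeholder project/region; substitute your own GCP settings.
vertexai.init(project="my-project", location="us-central1")

chat_model = ChatModel.from_pretrained("chat-bison@001")
chat = chat_model.start_chat(context="You are a concise assistant.")

# max_output_tokens caps the length of the model's reply, per the linked doc.
response = chat.send_message(
    "Summarize PaLM 2 in one sentence.", max_output_tokens=64
)
print(response.text)
```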
The PaLM-based interface is no longer under development. Closing.
We want to limit the reply length of chat responses, but google.generativeai.chat does not appear to support the max_output_tokens parameter. I'm not sure whether this is just not implemented yet, an API limitation, or something else, but the vertexai Python SDK Chat model appears to support it (see Vertex AI Chat model parameters), and so does the google.generativeai.generate_text function. I had thought that perhaps max_output_tokens wasn't supported in chat, only in text generation, but the doc clearly shows it being used in a chat.

(It's a bit confusing that Google seems to have two different Python SDKs, this google-generativeai one and google-cloud-aiplatform. Is there any difference if all a developer wants to do is send chat to a model and get responses back?)