google-gemini / generative-ai-python

The official Python library for the Google Gemini API
https://pypi.org/project/google-generativeai/
Apache License 2.0
1.19k stars 227 forks

stop_reason is always STOP, and usage_metadata is consistently empty #351

Open Andy963 opened 1 month ago

Andy963 commented 1 month ago

Description of the bug:

version: 0.5.4, python 3.9

The stop reason is always "STOP" and never changes, and usage_metadata is always empty.

Actual vs expected behavior:

async for item in resp:
    c = item.candidates
    print("stop_reason", c[0].finish_reason, f"usage:{item.usage_metadata}:end")

output:

stop_reason FinishReason.STOP usage::end # usage is empty
stop_reason FinishReason.STOP usage::end
stop_reason FinishReason.STOP usage::end
answer is: I am a large language model, trained by Google. I do not have a name. 

input_tokens 5  # counted via await client.count_tokens_async, not taken from usage_metadata
output_tokens 20

The first stop reason should be empty, None, or some kind of "start" marker, not STOP. And usage_metadata should contain the prompt and output token counts, but it is empty.
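Until usage_metadata is populated, token accounting has to fall back to explicit counting, as in the output above. Below is a minimal sketch of that fallback logic. The `FakeUsage` class and the whitespace "tokenizer" are stand-ins so the logic runs without an API key; in real code the counter would be something like `model.count_tokens(text).total_tokens`.

```python
class FakeUsage:
    """Stand-in for the SDK's usage_metadata object (hypothetical)."""
    def __init__(self, prompt=0, candidates=0):
        self.prompt_token_count = prompt
        self.candidates_token_count = candidates

def token_counts(usage, count_tokens, prompt, answer):
    """Return (input_tokens, output_tokens), trusting usage_metadata
    only when it actually carries non-zero counts."""
    if usage and (usage.prompt_token_count or usage.candidates_token_count):
        return usage.prompt_token_count, usage.candidates_token_count
    # usage_metadata is empty (the bug reported here) -> count manually
    return count_tokens(prompt), count_tokens(answer)

# Crude stand-in for a real token counter, for demonstration only.
fake_counter = lambda text: len(text.split())

print(token_counts(FakeUsage(), fake_counter, "who are you", "I am a model"))
```

Once the server starts filling in usage_metadata, the same helper would prefer the reported counts automatically.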

In non-stream mode, the "token_count" field is always 0:

GenerateContentResponse(
    done=True,
    iterator=None,
    result=glm.GenerateContentResponse({
      "candidates": [
        {
          "content": {
            "parts": [
              {
                "text": "Hello! \ud83d\udc4b  How can I help you today? \ud83d\ude0a \n"
              }
            ],
            "role": "model"
          },
          "finish_reason": 1,
          "index": 0,
          "safety_ratings": [
            {
              "category": 9,
              "probability": 1,
              "blocked": false
            },
            {
              "category": 8,
              "probability": 1,
              "blocked": false
            },
            {
              "category": 7,
              "probability": 1,
              "blocked": false
            },
            {
              "category": 10,
              "probability": 1,
              "blocked": false
            }
          ],
          "token_count": 0,  # this field is alway 0
          "grounding_attributions": []
        }
      ]
    }),
)
answer is: Hello! 👋  How can I help you today? 😊 

input_tokens 1 # counted by client.count_tokens, not taken from usage_metadata
output_tokens 14
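For context, the `"finish_reason": 1` in the raw dump above is what prints as `FinishReason.STOP`. A local replica of the enum makes the mapping explicit (the values are assumed from the `glm.Candidate.FinishReason` proto; check the google.ai.generativelanguage definitions for the authoritative list):

```python
from enum import IntEnum

# Local replica of glm.Candidate.FinishReason (values assumed).
class FinishReason(IntEnum):
    FINISH_REASON_UNSPECIFIED = 0
    STOP = 1
    MAX_TOKENS = 2
    SAFETY = 3
    RECITATION = 4
    OTHER = 5

print(FinishReason(1).name)  # -> STOP, matching the dump above
```

So the complaint is not that STOP is an invalid value, but that it is the only value ever returned, even on intermediate stream chunks.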

Was this behavior caused by a bug, or is the server not yet equipped to handle this?

Any other information you'd like to share?

package version: 0.5.4 python 3.9

GMC-Nickies commented 1 month ago

I am also seeing usage_metadata always being empty, even when running the Gemini Cookbook notebook code directly.