google-gemini / generative-ai-python

The official Python library for the Google Gemini API
https://pypi.org/project/google-generativeai/
Apache License 2.0

Gemini-Pro response.text Error #196

Open AamodThakur opened 10 months ago

AamodThakur commented 10 months ago

We are getting an error after receiving a response from Gemini.

Error: "The response.text quick accessor only works for simple (single-Part) text responses. This response is not simple text. Use the result.parts accessor or the full result.candidates[index].content.parts lookup instead."

Code:

    genai.configure(api_key=api_key)
    model = genai.GenerativeModel('gemini-pro')

    response = model.generate_content(final_msg[l])
    tmp.append(response.text)  # <-- Getting the error here

It errors for multiple text inputs that contain mathematical tokens (like lambda, pi, alpha, beta, ...).

Thanks
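A defensive alternative to the quick accessor is to walk result.candidates[index].content.parts yourself, as the error message suggests. A minimal sketch (extract_text is a hypothetical helper name; it assumes the documented response shape of candidates → content → parts → text):

```python
def extract_text(response):
    """Collect text from every Part of every candidate; return '' when
    the model returned no text at all (instead of raising like .text)."""
    chunks = []
    for candidate in getattr(response, "candidates", []) or []:
        content = getattr(candidate, "content", None)
        for part in getattr(content, "parts", []) or []:
            text = getattr(part, "text", "")
            if text:
                chunks.append(text)
    return "".join(chunks)
```

With this, the failing line becomes `tmp.append(extract_text(response))`, at the cost of silently appending an empty string when the model returned nothing.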

Yusuf80216 commented 9 months ago

I'm also having the same error. I encountered it while passing 4 images with a prompt to the gemini-pro-vision image model.

Then I tried using result.parts, as suggested by Gemini. It returned an empty list [].

MarkDaoust commented 9 months ago

Right, sometimes the model doesn't return any text.

result.parts or result.candidates will show you what it did return.

You can check response.prompt_feedback to see if it had a problem with the prompt, and look at the additional fields on the Candidate - result.candidates[index] to see the finish reason and a few other things.

Maybe the error message should say that.
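The checks above can be sketched as a small diagnostic (diagnose is a hypothetical helper name; it only reads the fields mentioned in this thread, prompt_feedback and each candidate's finish_reason):

```python
def diagnose(response):
    """Summarise why a response may carry no text, using
    prompt_feedback and each candidate's finish_reason."""
    notes = []
    feedback = getattr(response, "prompt_feedback", None)
    if feedback:
        notes.append("prompt_feedback: %s" % feedback)
    for index, candidate in enumerate(getattr(response, "candidates", []) or []):
        notes.append("candidate %d finish_reason: %s"
                     % (index, getattr(candidate, "finish_reason", "?")))
    return notes
```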

Erickrus commented 9 months ago

Same problem. How can I wait for the response to complete?

Yusuf80216 commented 9 months ago

> Right, sometimes the model doesn't return any text.
>
> result.parts or result.candidates will show you what it did return.
>
> You can check response.prompt_feedback to see if it had a problem with the prompt, and look at the additional fields on the Candidate - result.candidates[index] to see the finish reason and a few other things.
>
> Maybe the error message should say that.

I used response.prompt_feedback; it didn't mention any warning message, and all text was rated neutral and safe. Then with result.candidates, the reason was given as finish_reason: other.

HienBM commented 9 months ago

See #170. Maybe it will help you.

Yusuf80216 commented 9 months ago

> See #170 Maybe it will help you

Yes, I had tried the techniques in that issue too, but they didn't solve the error:

1] Altered safety_settings - still the same error
2] Increased max_output_tokens to 10000 tokens - still the same error
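For reference, those two mitigations are typically passed at model construction, roughly like this (a sketch only; the string shorthand for safety_settings and the exact category names and thresholds should be checked against the library documentation):

```python
import google.generativeai as genai

model = genai.GenerativeModel(
    'gemini-pro',
    generation_config={'max_output_tokens': 10000},
    # String shorthand for one safety category; see the docs for the
    # full list of categories and block thresholds.
    safety_settings={'HARASSMENT': 'block_none'},
)
```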

e-ave commented 9 months ago

Would really like this to be fixed or properly explained... For me it works only if max_token_limit = 2048; any lower value causes this error, which is absurd. What if I want a shorter answer?

Andy963 commented 8 months ago

One of the situations was: the AI generated an answer that is protected by citation, so the candidates look like this:

candidates [index: 0 finish_reason: RECITATION ] (or maybe finish_reason: Other, as @Yusuf80216 mentioned)

and the parts object is an empty list. So I guess the AI generated an answer, but the answer was filtered out by some citation rules, and we get this error. However, in BaseGenerateContentResponse we can only get one candidate (or it will raise a ValueError).

So maybe it's better to return a candidate that stopped with FinishReason.STOP, @MarkDaoust
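That suggestion can be sketched as a helper that prefers a normally-finished candidate (first_stop_candidate is a hypothetical name; it assumes the FinishReason enum where STOP has value 1, with RECITATION and OTHER among the non-STOP values):

```python
def first_stop_candidate(response, stop=1):
    """Return the first candidate whose finish_reason is STOP
    (value 1 in the FinishReason enum), or None if every
    candidate was cut short (SAFETY, RECITATION, OTHER, ...)."""
    for candidate in getattr(response, "candidates", []) or []:
        if candidate.finish_reason == stop:
            return candidate
    return None
```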

codewithdark-git commented 8 months ago

For text:

try:
    # Access the 'candidates' attribute from the response
    candidates = response.candidates

    # Assuming you want to access the first candidate's content
    generated_text = candidates[0].content.parts[0].text

    print("Generated Text:", generated_text)
except (AttributeError, IndexError) as e:
    print("Error:", e)

Used for an image:


response = model.generate_content(img)

try:
    # Check if 'candidates' list is not empty
    if response.candidates:
        # Access the first candidate's content if available
        if response.candidates[0].content.parts:
            generated_text = response.candidates[0].content.parts[0].text
            print("Generated Text:", generated_text)
        else:
            print("No generated text found in the candidate.")
    else:
        print("No candidates found in the response.")
except (AttributeError, IndexError) as e:
    print("Error:", e)

seifeur commented 8 months ago

The situation is quite perplexing. I'm experiencing similar issues, regardless of the maximum token count or the safety settings adjustment. Specifically, when I input a shorter text, approximately 5,000 characters or less, the system operates smoothly.

However, once the input exceeds 10,000 characters, the problems reemerge. Despite Google's claims that Gemini Pro can accommodate up to 30,720 input tokens, the reality falls short. In numerous instances, the model struggles to process even 7,000 tokens.

When sending larger texts you receive either the response.text error or HTTPConnectionPool(host='localhost', port=40843): Read timed out. (read timeout=60.0)
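Until the long-input behaviour is explained, one workaround is to split the input near the size that reportedly works (~5,000 characters) and send the pieces separately. A sketch (chunk_text is a hypothetical helper; breaking at newline boundaries is an arbitrary choice):

```python
def chunk_text(text, limit=5000):
    """Split text into pieces of at most `limit` characters, preferring
    to break at newline boundaries (a single line longer than `limit`
    is kept whole)."""
    chunks, current = [], ""
    for line in text.splitlines(keepends=True):
        if current and len(current) + len(line) > limit:
            chunks.append(current)
            current = ""
        current += line
    if current:
        chunks.append(current)
    return chunks
```

Each piece would then go through model.generate_content separately, with the outputs concatenated afterwards.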

phillipeloher commented 8 months ago

Same problem. chat.send_message (with the stream flag) also raises the same exception, but only occasionally. If the response comes back in different streaming chunks, some of them arrive fine before the exception. Seems like a bug on the Google endpoint.
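When the exception surfaces mid-stream, you can at least keep the chunks that did arrive. A sketch (collect_stream is a hypothetical helper; it assumes chunks expose .parts like the library's streamed responses, and that the failure surfaces as an exception raised while iterating):

```python
def collect_stream(chunks):
    """Accumulate text from streamed chunks; if the stream raises
    partway through, keep whatever text arrived before the failure."""
    collected = []
    iterator = iter(chunks)
    while True:
        try:
            chunk = next(iterator)
        except StopIteration:
            break
        except Exception as err:  # e.g. the intermittent mid-stream error
            print("stream stopped early:", err)
            break
        for part in getattr(chunk, "parts", []) or []:
            text = getattr(part, "text", "")
            if text:
                collected.append(text)
    return "".join(collected)
```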

samuelstrike commented 7 months ago

Use the following code to extract the response; you cannot always access the text directly from the response:

    all_responses = []
    for response in responses:
        for part in response.parts:
            if part.text:
                all_responses.append(part.text)

Jaymahangacode commented 7 months ago

We are getting an error after receiving a response from Gemini.

Error: "The response.text quick accessor only works for simple (single-Part) text responses. This response is not simple text. Use the result.parts accessor or the full result.candidates[index].content.parts lookup instead."

Code:

    genai.configure(api_key=api_key)
    model = genai.GenerativeModel('gemini-pro')

    response = model.generate_content(final_msg[l])
    tmp.append(response.text)  # <-- Getting the error here

It errors for multiple text inputs that contain mathematical tokens (like lambda, pi, alpha, beta, ...).

Thanks

MarkDaoust commented 7 months ago

Right, there are a few ways that it can fail to return text. Look at result.prompt_feedback and result.candidates[0]; they will give you more information about what went wrong.