AamodThakur opened 10 months ago
I'm also having the same error. I encountered it while passing 4 images with a prompt to the gemini-pro-vision image model.

I then tried using `result.parts`, as suggested by Gemini. It returned an empty list `[]`.
Right, sometimes the model doesn't return any text. `result.parts` or `result.candidates` will show you what it did return. You can check `response.prompt_feedback` to see if it had a problem with the prompt, and look at the additional fields on the `Candidate` (`result.candidates[index]`) to see the finish reason and a few other things.

Maybe the error message should say that.
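To make that inspection routine, the advice above can be wrapped in a small diagnostic helper. This is a hypothetical sketch, not part of the SDK; it only reads the `prompt_feedback.block_reason` and `candidates[i].finish_reason` fields mentioned above:

```python
# Hypothetical helper: summarize why response.text would fail,
# using prompt_feedback and each candidate's finish_reason.
def explain_empty_response(response):
    reasons = []
    feedback = getattr(response, "prompt_feedback", None)
    if feedback is not None and getattr(feedback, "block_reason", None):
        reasons.append(f"prompt blocked: {feedback.block_reason}")
    for i, candidate in enumerate(getattr(response, "candidates", []) or []):
        finish = getattr(candidate, "finish_reason", None)
        reasons.append(f"candidate {i} finish_reason: {finish}")
    return "; ".join(reasons) or "no candidates returned"
```

Calling this before touching `response.text` turns the opaque accessor error into a readable diagnosis.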
Same problem. How can I wait for the response to complete?
I used `response.prompt_feedback`; it didn't mention any warning, all text was rated neutral and safe. Then with `result.candidates`, the reason was given as `finish_reason: other`.
See #170 Maybe it will help you
Yes, I had tried the techniques in that issue too, but they didn't solve the error:
1] Altered `safety_settings`, still the same error
2] Increased `max_output_tokens` to 10000 tokens, still the same error
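For reference, both of those mitigations can be combined in one call. A minimal sketch, assuming the google-generativeai Python SDK; the helper function is hypothetical, while the category and threshold strings follow the SDK's documented names:

```python
# Hypothetical helper: bundle relaxed safety settings and a larger
# output budget into kwargs for model.generate_content(...).
def build_generation_kwargs(max_output_tokens=10000, threshold="BLOCK_NONE"):
    categories = [
        "HARM_CATEGORY_HARASSMENT",
        "HARM_CATEGORY_HATE_SPEECH",
        "HARM_CATEGORY_SEXUALLY_EXPLICIT",
        "HARM_CATEGORY_DANGEROUS_CONTENT",
    ]
    return {
        "safety_settings": [
            {"category": c, "threshold": threshold} for c in categories
        ],
        "generation_config": {"max_output_tokens": max_output_tokens},
    }

# Usage (requires a configured model):
# response = model.generate_content(prompt, **build_generation_kwargs())
```

Note that neither setting helps when the candidate is dropped with `finish_reason: other` or `RECITATION`, which is the case reported here.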
Would really like this to be fixed or properly explained... For me it works if `max_token_limit` = 2048. Anything less causes this error, which is absurd. What if I want a shorter answer?
One of the situations was: the AI generated an answer which is protected by citation, so the candidates will look like this:

candidates [index: 0 finish_reason: RECITATION ] (maybe `finish_reason: Other`, as @Yusuf80216 mentioned)

and the `parts` object is an empty list. So I guess the AI generated an answer, but the answer was filtered out by some citation rules, and we get this error. However, in `BaseGenerateContentResponse` we can only get one candidate (otherwise it will raise `ValueError`).

So maybe it's better to return a candidate which is stopped with `FinishReason.STOP`, @MarkDaoust
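That suggestion could be sketched as a small selection helper. This is hypothetical code, not the SDK's behavior; it assumes finish reasons compare as strings, whereas the real SDK exposes an enum:

```python
# Hypothetical helper: prefer a candidate that finished normally
# (finish_reason STOP) over one cut off by RECITATION / OTHER.
def pick_stopped_candidate(candidates, stop_value="STOP"):
    for candidate in candidates:
        if str(getattr(candidate, "finish_reason", "")) == stop_value:
            return candidate
    # Fall back to the first candidate so the caller can still
    # inspect its finish_reason and decide what to do.
    return candidates[0] if candidates else None
```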
For text:

```python
try:
    # Access the 'candidates' attribute from the response
    candidates = response.candidates
    # Assuming you want to access the first candidate's content
    generated_text = candidates[0].content.parts[0].text
    print("Generated Text:", generated_text)
except (AttributeError, IndexError) as e:
    print("Error:", e)
```
Used for image:

```python
response = model.generate_content(img)
try:
    # Check that the 'candidates' list is not empty
    if response.candidates:
        # Access the first candidate's content if available
        if response.candidates[0].content.parts:
            generated_text = response.candidates[0].content.parts[0].text
            print("Generated Text:", generated_text)
        else:
            print("No generated text found in the candidate.")
    else:
        print("No candidates found in the response.")
except (AttributeError, IndexError) as e:
    print("Error:", e)
```
The situation is quite perplexing. I'm experiencing similar issues, regardless of the maximum token count or the safety settings adjustment. Specifically, when I input a shorter text, approximately 5,000 characters or less, the system operates smoothly.
However, once the input exceeds 10,000 characters, the problems reemerge. Despite Google's claims that Gemini Pro can accommodate up to 30,720 input tokens, the reality falls short. In numerous instances, the model struggles to process even 7,000 tokens.
When sending larger text you receive either the `response.text` error or `HTTPConnectionPool(host='localhost', port=40843): Read timed out. (read timeout=60.0)`.
Same problem, chat.send_message (with stream flag) also sending the same exception, but only occasionally. If the response comes back in different streaming chunks, some of them come back just fine before the exception. Seems like a bug on the Google endpoint.
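One way to keep whatever did arrive is to accumulate text chunk by chunk and stop at the first failing one. A minimal sketch with a hypothetical helper; it assumes the failure surfaces as a `ValueError` while iterating, which matches the accessor error described in this thread:

```python
# Hypothetical helper: gather text from streaming chunks, keeping
# whatever arrived before a chunk fails.
def collect_stream_text(chunks):
    pieces = []
    try:
        for chunk in chunks:
            for part in getattr(chunk, "parts", []) or []:
                text = getattr(part, "text", None)
                if text:
                    pieces.append(text)
    except ValueError:
        pass  # keep the partial output collected so far
    return "".join(pieces)
```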
Use the following code to extract the response; you cannot access the text directly from the response:

```python
all_responses = []
for response in responses:
    for part in response.parts:
        if part.text:
            all_responses.append(part.text)
```
We are getting an error after getting a response from Gemini.

Error: "The `response.text` quick accessor only works for simple (single-`Part`) text responses. This response is not simple text. Use the `result.parts` accessor or the full `result.candidates[index].content.parts` lookup instead."

Code:

```python
genai.configure(api_key=api_key)
model = genai.GenerativeModel('gemini-pro')
response = model.generate_content(final_msg[l])
tmp.append(response.text)  # <-- Getting error here
```

It errors for multiple text inputs which contain mathematical tokens (like lambda, pi, alpha, beta, ...).

Thanks
Right, there are a few ways that it can fail to return text. Look at `result.prompt_feedback` and `result.candidates[0]`; they will give you more information about what went wrong.