HienBM opened this issue 9 months ago (status: Open)
Me too, I have the same error, especially with Gemini Vision.
You could try to delete `max_output_tokens` in the `generation_config` of the model, if you use that.
> You could try to delete the `generation_config` in the model if you use that

It works for me. Thanks @ydm20231608
> You could try to delete the `generation_config` in the model if you use that

I also encountered this problem. Where is the `generation_config` that needs to be deleted? @HienBM
Hi @Ki-Zhang,
When you set up your model, the `generation_config` is in it. Try leaving it out, like this:
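Something like this, for example (just a sketch, assuming the google.generativeai SDK; the `max_output_tokens=50` below is only an illustration):

```python
import google.generativeai as genai

# genai.configure(api_key=...) is assumed to have been called already.

# Instead of constructing the model with a generation_config, e.g.
#   model = genai.GenerativeModel(
#       'gemini-pro-vision',
#       generation_config=genai.GenerationConfig(max_output_tokens=50),
#   )
# leave it out, so the model uses its default generation settings:
model = genai.GenerativeModel('gemini-pro-vision')
```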
Thank you for your answer @HienBM. But I just used `model = genai.GenerativeModel('gemini-pro-vision')` to set up the gemini-pro-vision model, and I encountered the same problem:
```
Cell In[31], line 23, in dectect_object_why(ori_img, head_pos, gaze_pos)
      8 response = model.generate_content(
      9     [
     10         "The person outlined in the blue frame is looking at what object is marked with the red circle in the picture?",
    (...)
     20     stream=True
     21 )
     22 response.resolve()
---> 23 to_markdown(response.text)
     24 return response.text

File ~/miniconda3/envs/gemini/lib/python3.9/site-packages/google/generativeai/types/generation_types.py:328, in BaseGenerateContentResponse.text(self)
    326 parts = self.parts
    327 if len(parts) != 1 or "text" not in parts[0]:
--> 328     raise ValueError(
    329         "The `response.text` quick accessor only works for "
    330         "simple (single-`Part`) text responses. This response is not simple text."
    331         "Use the `result.parts` accessor or the full "
    332         "`result.candidates[index].content.parts` lookup "
    333         "instead."
    334     )
    335 return parts[0].text

ValueError: The `response.text` quick accessor only works for simple (single-`Part`) text responses. This response is not simple text.Use the `result.parts` accessor or the full `result.candidates[index].content.parts` lookup instead.
```
I don't know how to solve this problem, but it does not occur when I use other example images as input to the model.
@Ki-Zhang As of January 2024, the entire list of Harm Categories can be found here. The implementation for `gemini-pro` or `gemini-pro-vision` can be carried out as follows in Python:
```python
safety_settings = [
    {
        "category": "HARM_CATEGORY_DANGEROUS",
        "threshold": "BLOCK_NONE",
    },
    {
        "category": "HARM_CATEGORY_HARASSMENT",
        "threshold": "BLOCK_NONE",
    },
    {
        "category": "HARM_CATEGORY_HATE_SPEECH",
        "threshold": "BLOCK_NONE",
    },
    {
        "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
        "threshold": "BLOCK_NONE",
    },
    {
        "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
        "threshold": "BLOCK_NONE",
    },
]
```
The threshold values for each category can be found here:
| Threshold (Google AI Studio) | Threshold (API) | Description |
|---|---|---|
| Block none | BLOCK_NONE | Always show regardless of probability of unsafe content |
| Block few | BLOCK_ONLY_HIGH | Block when high probability of unsafe content |
| Block some | BLOCK_MEDIUM_AND_ABOVE | Block when medium or high probability of unsafe content |
| Block most | BLOCK_LOW_AND_ABOVE | Block when low, medium, or high probability of unsafe content |
| | HARM_BLOCK_THRESHOLD_UNSPECIFIED | Threshold is unspecified, block using default threshold |
These settings can be applied as:

```python
# For an image model
image_model.generate_content([your_image, prompt], safety_settings=safety_settings)

# For a text model
text_model.generate_content(prompt, safety_settings=safety_settings)
```
Additionally, make sure the image does not contain content related to `openAI` or `chatgpt`; otherwise, it may result in an error. Screenshots taken with the default `Snipping Tool` on Windows might also lead to such errors.
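If I remember correctly, the SDK also accepts an enum form instead of the string dicts; a sketch:

```python
from google.generativeai.types import HarmCategory, HarmBlockThreshold

# Same effect as the dict form above, using the SDK's enum types.
safety_settings = {
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
}
```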
So this is caused by content being blocked on the server side? If so, the thrown exception text is terrible.
> @Ki-Zhang As of January 2024, the entire list of Harm Categories can be found here. [...]
Thanks for providing this! However, the safety settings do not work for me; instead, changing the temperature from 0 to 0.7 works. The generated content may have been blocked because my input question is about Black people (from the MMLU dataset).
> @Ki-Zhang As of January 2024, the entire list of Harm Categories can be found here. [...]
Thanks for the information. However, neither the safety settings nor the temperature works for me. The content is in the biomedical domain, and all other cases generate successfully except one; I don't know why it fails on that one.
Try setting up

```python
for candidate in response.candidates:
    return [part.text for part in candidate.content.parts]
```

instead of `response.text`. It worked for me.
This is probably happening because you are getting a `finish_reason` of `RECITATION` for your chosen candidate:

> finish_reason: This field may be populated with recitation information for any text included in the content. These are passages that are "recited" from copyrighted material in the foundational LLM's training data.

So you may need to simply choose another candidate, or run again for a new result that doesn't infringe on copyright.
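A sketch of how you might detect this case before retrying (google.generativeai SDK assumed; the model name and prompt are placeholders, and `finish_reason` is compared by name to avoid depending on a particular enum import):

```python
import google.generativeai as genai

model = genai.GenerativeModel('gemini-pro')  # placeholder model name
prompt = "Summarize the opening chapter of a famous novel."  # placeholder

response = model.generate_content(prompt)
candidate = response.candidates[0]

# RECITATION means the text was dropped because it matched copyrighted
# passages in the training data; re-running usually yields a fresh result.
if candidate.finish_reason.name == "RECITATION":
    response = model.generate_content(prompt)
```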
> You could try to delete the `generation_config` in the model if you use that

This also works for me, but what might be the reason for that?
> This also works for me, but what might be the reason for that?

I have the same issue, and I wonder what the reason is.
This problem happened when `max_output_tokens` was too small for the response, so there is no need to delete `generation_config`. That is my experience.
I think the main reason is that the model sometimes doesn't return any text, based on the explanation from @MarkDaoust in https://github.com/google/generative-ai-python/issues/196#issuecomment-1930503073.
I still get this error with my code even when I delete `generation_config`. But when I set up

```python
for candidate in response.candidates:
    return [part.text for part in candidate.content.parts]
```

instead of `response.text`, the error no longer appears.
@HienBM, Thank you for this information. It resolved my errors.
> ValueError: The `response.text` quick accessor only works for simple (single-`Part`) text responses. This response is not simple text. Use the `result.parts` accessor or the full `result.candidates[index].content.parts` lookup instead.

To fix the error, check your code against this; it's working for me 😎 when you work with text only:
```python
model = genai.GenerativeModel('gemini-pro')
prompt = "What is the meaning of life?"
response = model.generate_content(prompt)
specific_answer = response.candidates[0].content.parts[0].text
print(specific_answer)
```
and when you work with an image:
```python
response = model.generate_content(img)
try:
    # Check if the 'candidates' list is not empty
    if response.candidates:
        # Access the first candidate's content if available
        if response.candidates[0].content.parts:
            generated_text = response.candidates[0].content.parts[0].text
            print("Generated Text:", generated_text)
        else:
            print("No generated text found in the candidate.")
    else:
        print("No candidates found in the response.")
except (AttributeError, IndexError) as e:
    print("Error:", e)
```
> You could try to delete `max_output_tokens` in the `generation_config` of the model, if you use that.

Thank you very much.
Maybe it's because multiple results are generated. When there is only one result, calling `response.text` directly is no problem, but if there are multiple results, an error is raised. In that case you need to use the first result explicitly, that is:

`response.candidates[0].content.parts[0].text`
> Maybe it's because multiple results are generated
`res.candidates[0].content.parts` was an empty list in my case, and `res.candidates[0].finish_reason` was `MAX_TOKENS`.
My current diagnosis of this issue is that `max_output_tokens` is treated oddly by this generative-ai-python SDK. It appears that `max_output_tokens` is an absolute upper limit for a reply, and when this limit is reached, the SDK returns an empty reply with `finish_reason=MAX_TOKENS`. This is confirmed by the docstring for the error code:

> MAX_TOKENS (2): The maximum number of tokens as specified in the request was reached.

So a lower `max_output_tokens` value will result in no text and a MAX_TOKENS error. This is different from the behaviour most people expect, which would be to return some (truncated) text when the upper limit is hit. For example, in Vertex AI, `max_output_tokens` is interpreted as a length modifier, where a lower value results in shorter text responses, not no text:

> MAX_OUTPUT_TOKENS: Maximum number of tokens that can be generated in the response. A token is approximately four characters. 100 tokens correspond to roughly 60-80 words. Specify a lower value for shorter responses and a higher value for potentially longer responses.
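If that diagnosis is right, a rough workaround sketch looks like this (google.generativeai SDK assumed; the model name, prompt, and token limits are arbitrary placeholders):

```python
import google.generativeai as genai

model = genai.GenerativeModel('gemini-pro')  # placeholder model name
prompt = "Explain tokenization in three paragraphs."  # placeholder prompt

response = model.generate_content(
    prompt,
    generation_config=genai.GenerationConfig(max_output_tokens=50),
)

candidate = response.candidates[0]
# With a tight limit, the SDK may return an empty candidate whose
# finish_reason is MAX_TOKENS rather than returning truncated text.
if not candidate.content.parts and candidate.finish_reason.name == "MAX_TOKENS":
    # Retry with a larger budget so the reply fits under the limit.
    response = model.generate_content(
        prompt,
        generation_config=genai.GenerationConfig(max_output_tokens=1024),
    )
```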
@Ki-Zhang
The problem occurs not only with the image but also with the prompt. I tried the same image with a different prompt and it works. If you do not want to change the prompt, then disable blocking in the safety settings, like below:
```python
def get_gemini_response(input, image):
    model = genai.GenerativeModel('gemini-pro-vision')
    safe = [
        {"category": "HARM_CATEGORY_DANGEROUS", "threshold": "BLOCK_NONE"},
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"},
        {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_NONE"},
        {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_NONE"},
        {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_NONE"},
    ]
    if input != "":
        response = model.generate_content([input, image], safety_settings=safe)
    else:
        response = model.generate_content(image)
    return response.text
```
I hope it works for you. Thank you!
> You could try to delete `max_output_tokens` in the `generation_config` of the model, if you use that.

It worked for me.
Nothing works for me; my content is medical and it is considered unsafe! Has anyone found a solution?
> Nothing works for me; my content is medical and it is considered unsafe! Has anyone found a solution?

I have not tried it yet, but I have heard that it might work: try using some prompting techniques to extract the response, like giving context before the question, e.g. `Instruction = f"You are the backend of a medical chatbot, be responsible and respond in a formal tone. The users of this medical chatbot are mature and intelligent students and doctors, so respond according to the query: {query}"`
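A sketch of that prompt wrapping (the model name and query here are made up):

```python
import google.generativeai as genai

model = genai.GenerativeModel('gemini-pro')  # placeholder model name
query = "What are the common side effects of metformin?"  # hypothetical query

# Give the model a role and an audience before the actual question.
instruction = (
    "You are the backend of a medical chatbot, be responsible and respond "
    "in a formal tone. The users of this medical chatbot are mature and "
    f"intelligent students and doctors, so respond according to the query: {query}"
)

response = model.generate_content(instruction)
```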
I am facing this issue; how do I resolve it?

> Invalid operation: The `response.parts` quick accessor requires a single candidate, but none were returned. Please check the `response.prompt_feedback` to determine if the prompt was blocked.
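For what it's worth, the check that the error message suggests might look like this (a sketch, assuming the google.generativeai SDK; the model name and prompt are placeholders):

```python
import google.generativeai as genai

model = genai.GenerativeModel('gemini-pro')  # placeholder model name
prompt = "..."  # the prompt that triggered the error

response = model.generate_content(prompt)

# If no candidates came back, the prompt itself was probably blocked;
# prompt_feedback carries the block reason and per-category safety ratings.
if not response.candidates:
    print(response.prompt_feedback)
else:
    print(response.candidates[0].content.parts)
```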
@Vital1162

> This problem happened when `max_output_tokens` was too small for the response, so there is no need to delete `generation_config`. That is my experience.

This has been fixed; now it does return the partial text.
@HienBM

> Try set up

Yes, this is what the error message is trying to tell you: something went wrong, and it couldn't get a simple text result.

The one downside to this approach is that empty responses become an empty string, when maybe you want to check what went wrong and/or re-run the request.
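A sketch of a middle ground that keeps the diagnostics instead of returning an empty string (the helper name here is made up, not part of the SDK):

```python
def extract_text_or_raise(response):
    """Return the candidate text, surfacing the failure reason instead of
    silently returning an empty string (hypothetical helper)."""
    for candidate in response.candidates:
        if candidate.content.parts:
            return "".join(part.text for part in candidate.content.parts)
    # Nothing usable came back: report why, so the caller can re-run if needed.
    reasons = [c.finish_reason.name for c in response.candidates]
    raise ValueError(
        f"No text returned; finish_reasons={reasons}, "
        f"prompt_feedback={response.prompt_feedback}"
    )
```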
@Pratik-Kumar-Cse
This will update the error messages to output more information: https://github.com/google-gemini/generative-ai-python/pull/527. The right fix depends on what is going wrong.
Description of the bug:
Can someone help me check this error? The same code still ran successfully yesterday.
```
File ~/cluster-env/trident_env/lib/python3.10/site-packages/pandas/core/series.py:4630, in Series.apply(self, func, convert_dtype, args, **kwargs)
   4520 def apply(
   4521     self,
   4522     func: AggFuncType,
   (...)
   4525     **kwargs,
   4526 ) -> DataFrame | Series:
   4527     """
   4528     Invoke function on values of Series.
   4529     (...)
   4628     dtype: float64
   4629     """
-> 4630 return SeriesApply(self, func, convert_dtype, args, kwargs).apply()

File ~/cluster-env/trident_env/lib/python3.10/site-packages/pandas/core/apply.py:1025, in SeriesApply.apply(self)
   1022     return self.apply_str()
   1024 # self.f is Callable
-> 1025 return self.apply_standard()

File ~/cluster-env/trident_env/lib/python3.10/site-packages/pandas/core/apply.py:1076, in SeriesApply.apply_standard(self)
   1074 else:
   1075     values = obj.astype(object)._values
-> 1076 mapped = lib.map_infer(
   1077     values,
   1078     f,
   1079     convert=self.convert_dtype,
   1080 )
   1082 if len(mapped) and isinstance(mapped[0], ABCSeries):
   1083     # GH#43986 Need to do list(mapped) in order to get treated as nested
   1084     # See also GH#25959 regarding EA support
   1085     return obj._constructor_expanddim(list(mapped), index=obj.index)

File ~/cluster-env/trident_env/lib/python3.10/site-packages/pandas/_libs/lib.pyx:2834, in pandas._libs.lib.map_infer()

Cell In[116], line 82, in extract_absa_with_few_shot_gemini(text)
     80 response.resolve()
     81 time.sleep(1)
---> 82 return list_of_dict_to_string(string_to_list_dict(response.text.lower()))

File ~/cluster-env/trident_env/lib/python3.10/site-packages/google/generativeai/types/generation_types.py:328, in BaseGenerateContentResponse.text(self)
    326 parts = self.parts
    327 if len(parts) != 1 or "text" not in parts[0]:
--> 328     raise ValueError(
    329         "The `response.text` quick accessor only works for "
    330         "simple (single-`Part`) text responses. This response is not simple text."
    331         "Use the `result.parts` accessor or the full "
    332         "`result.candidates[index].content.parts` lookup "
    333         "instead."
    334     )
    335 return parts[0].text

ValueError: The `response.text` quick accessor only works for simple (single-`Part`) text responses. This response is not simple text.Use the `result.parts` accessor or the full `result.candidates[index].content.parts` lookup instead.
```

Actual vs expected behavior:
No response
Any other information you'd like to share?
No response