Did the changes in PR #343 fix this issue?
Getting the same error "ValueError: Content has no parts." on version google-cloud-aiplatform 1.40.0 today
Same here. I am not sure why it randomly generates this error.
All of a sudden I am getting "ValueError: Content has no parts" on the same document, which worked perfectly fine before.
Getting the same error on version google-cloud-aiplatform==1.41.0
Getting the same error:
File "C:\Users\chris\AppData\Local\Programs\Python\Python310\lib\site-packages\vertexai\generative_models_generative_models.py", line 1299, in text return self.candidates[0].text File "C:\Users\chris\AppData\Local\Programs\Python\Python310\lib\site-packages\vertexai\generative_models_generative_models.py", line 1352, in text return self.content.text File "C:\Users\chris\AppData\Local\Programs\Python\Python310\lib\site-packages\vertexai\generative_models_generative_models.py", line 1409, in text raise ValueError("Content has no parts.") ValueError: Content has no parts.
Same error here when I was trying to extract text from the model's response (answer.text). But the problem can weirdly be solved by deleting .text and retyping it 😂
My script:
import vertexai
from vertexai.preview import generative_models
from vertexai.preview.generative_models import GenerativeModel

vertexai.init(project='****', location='us-central1')
gemini_pro_model = GenerativeModel("gemini-pro")
answer = gemini_pro_model.generate_content("Now I am going to give you a molecule in SMILES format, as well as its caption (description), I want you to rewrite the caption into five different versions. SMILES: CN(C(=O)N)N=O, Caption: The molecule is a member of the class of N-nitrosoureas that is urea in which one of the nitrogens is substituted by methyl and nitroso groups. It has a role as a carcinogenic agent, a mutagen, a teratogenic agent and an alkylating agent. Format your output as a python list, for example, you should output something like [\"caption1\", \"caption2\", \"caption3\", \"caption4\", \"caption5\",] Do not use ```python``` in your answer.")
print(answer)
And these are two executions of the script above:
candidates {
  content {
    role: "model"
  }
  finish_reason: SAFETY
  safety_ratings {
    category: HARM_CATEGORY_HATE_SPEECH
    probability: NEGLIGIBLE
  }
  safety_ratings {
    category: HARM_CATEGORY_DANGEROUS_CONTENT
    probability: MEDIUM
    blocked: true
  }
  safety_ratings {
    category: HARM_CATEGORY_HARASSMENT
    probability: NEGLIGIBLE
  }
  safety_ratings {
    category: HARM_CATEGORY_SEXUALLY_EXPLICIT
    probability: NEGLIGIBLE
  }
}
usage_metadata {
  prompt_token_count: 155
  total_token_count: 155
}

candidates {
  content {
    role: "model"
    parts {
      text: "[\"This molecule belongs to the N-nitrosoureas class, characterized by a urea structure with one nitrogen substituted by methyl and nitroso groups.\", \"A member of the N-nitrosoureas, the molecule is essentially urea with one of its nitrogens replaced by methyl and nitroso groups.\", \"This N-nitrosourea derivative features a urea core where one nitrogen atom has been replaced with a methyl group and a nitroso group.\", \"The molecule in question is a member of the N-nitrosoureas class, which are urea derivatives with one nitrogen substituted by methyl and nitroso groups.\", \"Belonging to the N-nitrosourea class, this molecule\'s structure resembles urea with one nitrogen being replaced by methyl and nitroso groups.\"]"
    }
  }
  finish_reason: STOP
  safety_ratings {
    category: HARM_CATEGORY_HATE_SPEECH
    probability: NEGLIGIBLE
  }
  safety_ratings {
    category: HARM_CATEGORY_DANGEROUS_CONTENT
    probability: LOW
  }
  safety_ratings {
    category: HARM_CATEGORY_HARASSMENT
    probability: NEGLIGIBLE
  }
  safety_ratings {
    category: HARM_CATEGORY_SEXUALLY_EXPLICIT
    probability: NEGLIGIBLE
  }
}
usage_metadata {
  prompt_token_count: 155
  candidates_token_count: 157
  total_token_count: 312
}
The first execution terminated because of HARM_CATEGORY_DANGEROUS_CONTENT, so that's why nothing was returned: it got blocked! Therefore, you can set your safety configuration to BLOCK_NONE:
safety_config = {
    generative_models.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_HARASSMENT: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_HATE_SPEECH: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: generative_models.HarmBlockThreshold.BLOCK_NONE,
}
answer = gemini_pro_model.generate_content("Now I am going to give you a molecule in SMILES format, as well as its caption (description), I want you to rewrite the caption into five different versions. SMILES: CN(C(=O)N)N=O, Caption: The molecule is a member of the class of N-nitrosoureas that is urea in which one of the nitrogens is substituted by methyl and nitroso groups. It has a role as a carcinogenic agent, a mutagen, a teratogenic agent and an alkylating agent. Format your output as a python list, for example, you should output something like [\"caption1\", \"caption2\", \"caption3\", \"caption4\", \"caption5\",] Do not use ```python``` in your answer.", safety_settings=safety_config)
You will not have this problem anymore :)
https://cloud.google.com/vertex-ai/docs/generative-ai/multimodal/configure-safety-attributes
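Alternatively, if you don't want to disable the filters, you can guard before touching response.text. A minimal sketch (the helper name is just illustrative, not part of the SDK):

def safe_text(response):
    """Return the candidate text, or None if the candidate is empty/blocked."""
    if not response.candidates:
        return None
    candidate = response.candidates[0]
    if not candidate.content.parts:  # blocked candidates come back with no parts
        return None
    return candidate.content.parts[0].text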
I tried catching the ValueError: Content has no parts. by adding this:
try:
    prediction = model.generate_content(
        prompt,
        generation_config=GENERATION_CONFIG,
    )
    logging.info(f"Prediction_text: {prediction.text}")
    return prediction
except ValueError as e:
    logging.error(f"Something went wrong with the API call: {e}")
    # If the response doesn't contain text, check if the prompt was blocked.
    logging.error(prediction.prompt_feedback)
    # Also check the finish reason to see if the response was blocked.
    logging.error(prediction.candidates[0].finish_reason)
    # If the finish reason was SAFETY, the safety ratings have more details.
    logging.error(prediction.candidates[0].safety_ratings)
    raise Exception(f"Something went wrong with the API call: {e}")
but this gave me:
AttributeError: 'GenerationResponse' object has no attribute 'prompt_feedback'
The issue here is that Gemini is blocking some of the text. It can be offset by setting the safety thresholds to None, as pointed out by @TTTTao725. Working on a PR to fix this issue.
If anyone wants to unblock themselves before we raise the PR, you can change the following function [get_gemini_response] in the utils here:
gemini/use-cases/retrieval-augmented-generation/utils/intro_multimodal_rag_utils.py
This will set the safety thresholds to the lowest level (BLOCK_NONE), so nothing will be blocked. More info here: configure-safety-attributes
Code:
from typing import List, Optional

from vertexai.generative_models import HarmBlockThreshold, HarmCategory

def get_gemini_response(
    generative_multimodal_model,
    model_input: List[str],
    stream: bool = True,
    generation_config: Optional[dict] = {"max_output_tokens": 2048, "temperature": 0.2},
    safety_settings: Optional[dict] = {
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
    },
) -> str:
    """
    This function generates text in response to a list of model inputs.

    Args:
        model_input: A list of strings representing the inputs to the model.
        stream: Whether to generate the response in a streaming fashion
            (returning chunks of text at a time) or all at once. Defaults to True.

    Returns:
        The generated text as a string.
    """
    # generation_config = {"max_output_tokens": 2048, "temperature": 0.1}
    # print(generation_config)
    # print(safety_settings)
    if stream:
        response = generative_multimodal_model.generate_content(
            model_input,
            generation_config=generation_config,
            stream=stream,
            safety_settings=safety_settings,
        )
        response_list = []
        for chunk in response:
            try:
                response_list.append(chunk.text)
            except Exception as e:
                print(
                    "Exception occurred while calling Gemini. Something is wrong."
                    " Lower the safety thresholds [safety_settings: BLOCK_NONE]"
                    " if not already done. -----",
                    e,
                )
                response_list.append("Exception occurred")
                continue
        response = "".join(response_list)
    else:
        response = generative_multimodal_model.generate_content(
            model_input, generation_config=generation_config
        )
        response = response.candidates[0].content.parts[0].text

    return response
Let me know if this resolves the issue: "ValueError: Content has no parts."
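For reference, a hypothetical call to the function above (the model name and input are illustrative, not from the notebook):

from vertexai.preview.generative_models import GenerativeModel

model = GenerativeModel("gemini-1.0-pro-vision")  # illustrative model name
print(get_gemini_response(model, ["Describe the attached document."], stream=True))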
@lavinigam-gcp, there could be something else at play here because even with safety_settings set to BLOCK_NONE, I get FinishReason.OTHER as a response with: response.candidates[0].finish_reason
Thanks for testing it out, Harsh. Would it be possible for you to share the document you are working on? And can you share the complete trace of the error?
Hey @lavinigam-gcp, what's the best way of sharing the .log file? I will have to remove the sensitive details from it.
Are there any other attributes from response.candidates[0] that could be helpful?
Did some more digging and found that the response is kind of empty:
2024-02-15 15:53:16 - INFO - Prediction: candidates {
  content {
    role: "model"
  }
  finish_reason: OTHER
}
usage_metadata {
  prompt_token_count: 356
  total_token_count: 356
}
I added safety_settings but the problem is still there.
safety_settings: Optional[dict] = {
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
},
hasattr(response,"text")
ValueError response:
candidates {
  content {
    role: "model"
  }
  finish_reason: OTHER
}
Ya, same error as me. I added a retry block in my code to call the API 5 times, but it just fails.
Since the API call was not returning anything, I added a delay between my API calls. The delay works in the sense that I am able to make more API calls now, but it still returns the same error. This makes me wonder if it is related to some quota limit / rate limit, even though the Cloud console is not reporting any overusage.
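For what it's worth, the retry wrapper I mean looks roughly like this sketch (the delay and attempt counts are arbitrary choices, not values anyone has confirmed):

import time

def generate_with_retry(model, prompt, retries: int = 5, base_delay: float = 2.0):
    """Retry generate_content when the candidate comes back empty."""
    for attempt in range(retries):
        response = model.generate_content(prompt)
        if response.candidates and response.candidates[0].content.parts:
            return response.candidates[0].content.parts[0].text
        time.sleep(base_delay * (attempt + 1))  # back off a little more each time
    raise RuntimeError("Still no content parts after retries (finish_reason may be OTHER).")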
Prompt:
convert the company below into the official English version.
Response:
Response blocked for unknown reason. Try rewriting the prompt.
@lavinigam-gcp, there could be something else at play here because even with safety_settings set to BLOCK_NONE, I get FinishReason.OTHER as a response with: response.candidates[0].finish_reason
I am experiencing the same issue. Why is this issue closed?
@zafercavdar We have updated the mRAG notebook to resolve the issue with the content block. Are you still facing the issue with the updated code? Also, is there a way you can share a description of the document you are running (or share the doc, if possible) so we can reproduce the error?
I have re-opened the issue for now.
Could you please share a reproducible example so that I can try to help debug what might be happening exactly? This will help us be able to try to resolve the issue more quickly.
Thanks!
cc @polong-lin @holtskinner
Hi @lavinigam-gcp,
I don't directly use the RAG notebook. I am running benchmarking on Gemini Pro using the Python Client library.
Python version: 3.8, google-cloud-aiplatform version: 1.38.1
Here is my prompt (i.e., the contents parameter value):
Pro-inflammatory cytokines play a crucial role in the etiology of atopic dermatitis. We demonstrated that Herba Epimedii has anti-inflammatory potential in an atopic dermatitis mouse model; however, limited research has been conducted on the anti-inflammatory effects and mechanism of icariin, the major active ingredient in Herba Epimedii, in human keratinocytes. In this study, we evaluated the anti-inflammatory potential and mechanisms of icariin in the tumor necrosis factor-alpha (TNF-alpha)/interferon-gamma (IFN-gamma)-induced inflammatory response in human keratinocytes (HaCaT cells) by observing these cells in the presence or absence of icariin. We measured IL-6, IL-8, IL-1 beta, MCP-1 and GRO-alpha production by ELISA; IL-6, IL-8, IL-1 beta, intercellular adhesion molecule-1 (ICAM-1) and tachykinin receptor 1 (TACR1) mRNA expression by real-time PCR; and P38-MAPK, P-ERK and P-JNK signaling expression by western blot in TNF-alpha/IFN-gamma-stimulated HaCaT cells before and after icariin treatment. The expression of INF-alpha-R1 and IFN-gamma-R1 during the stimulation of the cell models was also evaluated before and after icariin treatment. We investigated the effect of icariin on these pro-inflammatory cytokines and detected whether this effect occurred via the mitogen-activated protein kinase (MAPK) signal transduction pathways. We further specifically inhibited the activity of two kinases with 20 mu M SB203580 (a p38 kinase inhibitor) and 50 mu M PD98059 (an ERK1/2 kinase inhibitor) to determine the roles of the two signal pathways involved in the inflammatory response. We found that icariin inhibited TNF-alpha/IFN-gamma-induced IL-6, IL-8, IL-1 beta, and MCP-1 production in a dose-dependent manner; meanwhile, the icariin treatment inhibited the gene expression of IL-8, IL-1 beta, ICAM-1 and TACR1 in HaCaT cells in a time- and dose-dependent manner. Icariin treatment resulted in a reduced expression of p-P38 and p-ERK signal activation induced by TNF-alpha/IFN-gamma; however, only SB203580, the p38 alpha/beta inhibitor, inhibited the secretion of inflammatory cytokines induced by TNF-alpha/IFN-gamma in cultured HaCaT cells. The differential expression of TNF-alpha-R1 and IFN-gamma-R1 was also observed after the stimulation of TNF-alpha/IFN-gamma, which was significantly normalized after the icariin treatment. Collectively, we illustrated the anti-inflammatory property of icariin in human keratinocytes. These effects were mediated, at least partially, via the inhibition of substance P and the p38-MAPK signaling pathway, as well as by the regulation of the TNF-alpha-R1 and IFN-gamma-R1 signals. (C) 2015 Elsevier B.V. All rights reserved.
Generate a title for the given scientific paper above.
Other parameters:
generation_config = {
    "model": "gemini-1.0-pro-001",
    "max_output_tokens": 50,
    "top_p": 0.99,
    "temperature": 0.2,
    "candidate_count": 1,
}
safety_settings = {
    HarmCategory.HARM_CATEGORY_UNSPECIFIED: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
}
and stream=False.
This API call returns a "FinishReason.OTHER" finish reason, and response.text raises ValueError: Content has no parts.
Hi @zafercavdar, I am able to reproduce your issue here.
Thank you for bringing our attention to it. Our internal teams are looking into it. Allow us some time to respond.
https://console.cloud.google.com/vertex-ai/generative/language/create/text?authuser=0 The Vertex AI console also blocks. The blocks are not random and seem to depend on the keywords included in the output.
Prompt:
convert the company below into the official English version.
Lihit Lab Inc
Response:
Response blocked for unknown reason. Try rewriting the prompt.
Hi @takeruh, I am also able to reproduce your issue here.
This is currently a bug and, as stated in the previous comment, our teams are looking into the issue.
Any updates or a timeline on that? The behaviour is non-deterministic. Even with temperature set to 0.0, the request/response is blocked randomly when using exactly the same prompt. That's very annoying when trying to build reliable applications on top of Gemini's API. As soon as we lower the temperature below 0.5, the probability of getting rejected by Gemini's API increases. Safety settings are all set to BLOCK_NONE and the latest version of google-cloud-aiplatform (1.44.0) is in use.
@nmoell Thank you for raising the issue and being patient with it. I have again escalated the issue internally, and will report back as soon as I get an update. In the meantime, would it be possible for you to share any reproducible prompts where you have been observing the issue? Or if you can print the response object and share the "finish_reason" value so that we know what is actually causing the issue.
I can't disclose the prompt since it is using internal information (just some facts about internal shipping policies) which isn't in any sense offensive or unsafe. What I can share is the response object:
{
  "candidates": [
    {
      "content": {
        "role": "model",
        "parts": []
      },
      "finish_reason": 4,
      "safety_ratings": [
        {
          "category": 1,
          "probability": 1,
          "probability_score": 0.16438228,
          "severity": 1,
          "severity_score": 0.0715912,
          "blocked": false
        },
        {
          "category": 2,
          "probability": 1,
          "probability_score": 0.33458945,
          "severity": 2,
          "severity_score": 0.29158565,
          "blocked": false
        },
        {
          "category": 3,
          "probability": 1,
          "probability_score": 0.15507847,
          "severity": 1,
          "severity_score": 0.1261379,
          "blocked": false
        },
        {
          "category": 4,
          "probability": 1,
          "probability_score": 0.072374016,
          "severity": 1,
          "severity_score": 0.06548521,
          "blocked": false
        }
      ],
      "citation_metadata": {
        "citations": [
          {
            "start_index": 151,
            "end_index": 400,
            "uri": "https://www.REMOVED-OUR-WEBSITE.com/service/",
            "title": "",
            "license_": ""
          },
          {
            "start_index": 177,
            "end_index": 400,
            "uri": "",
            "title": "",
            "license_": ""
          }
        ]
      },
      "index": 0
    }
  ],
  "usage_metadata": {
    "prompt_token_count": 849,
    "total_token_count": 849,
    "candidates_token_count": 0
  }
}
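Side note: the numeric enum values in a raw dump like this can be mapped back to symbolic names via the underlying proto types. A sketch, assuming the google-cloud-aiplatform package (the specific values printed depend on your installed version's proto definitions):

from google.cloud.aiplatform_v1.types import Candidate, SafetyRating

# Look up the symbolic names behind the integers in the raw response dump.
print(Candidate.FinishReason(4).name)        # name behind finish_reason: 4
print(SafetyRating.HarmProbability(1).name)  # name behind probability: 1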
Hi,
I encounter the same problem. When I do, the finish_reason is OTHER. What's the meaning of that? It seems to happen randomly with certain prompts.
Here's a small example with a prompt:
from vertexai.preview import generative_models
import vertexai
from vertexai.preview.generative_models import GenerativeModel

model_params = {
    "temperature": 0.0,
}
safety_config = {
    generative_models.HarmCategory.HARM_CATEGORY_HATE_SPEECH: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_HARASSMENT: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_UNSPECIFIED: generative_models.HarmBlockThreshold.BLOCK_NONE,
}

vertexai.init()  # environment variables are set
model = GenerativeModel("gemini-pro")
PROMPT = "Translate the following to Swiss German: 'Hi, my name is Sara.'"
response = model.generate_content(PROMPT, generation_config=model_params, safety_settings=safety_config)
print(response)
print(response.text)
Output:
candidates {
  finish_reason: OTHER
}
usage_metadata {
  prompt_token_count: 15
  total_token_count: 15
}
...
AttributeError: Content has no parts.
I created PR https://github.com/googleapis/python-aiplatform/pull/3518 for the Vertex AI Python SDK to improve the error messages for this behavior. It will throw a ResponseValidationError with specific details as to why the content is empty/blocked.
You can try diagnosing the issue with this code, before the SDK gets updated:
response = model.generate_content(PROMPT, generation_config=model_params, safety_settings=safety_config)

message = ""
if not response.candidates or response._raw_response.prompt_feedback:
    message += (
        f"The model response was blocked due to {response._raw_response.prompt_feedback.block_reason}.\n"
        f"Block reason message: {response._raw_response.prompt_feedback.block_reason_message}.\n"
    )
else:
    candidate = response.candidates[0]
    message = (
        "The model response did not complete successfully.\n"
        f"Finish reason: {candidate.finish_reason.name}.\n"
        f"Finish message: {candidate.finish_message}.\n"
        f"Safety ratings: {candidate.safety_ratings}.\n"
    )
print(message)
Note - this doesn't explain the issue when the API returns OTHER and no block reason. I've been able to reproduce this behavior, and I've not been able to find a definitive reason for it.
For this specific issue, the product development team confirmed the response is blocked by the language filter. I'm going to work on getting this type of error to output a more specific message.
This is the list of supported languages https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models#language-support
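Once that PR lands, handling could look like the sketch below (the exception name comes from the PR; its exact import path is an assumption and may differ in the released SDK):

from vertexai.generative_models import GenerativeModel, ResponseValidationError  # import path assumed

model = GenerativeModel("gemini-pro")
try:
    response = model.generate_content("Translate 'hello' to Swiss German.")
    print(response.text)
except ResponseValidationError as e:
    # The enriched message should include the finish reason / block reason.
    print(f"Response failed validation: {e}")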
Having a similar issue to the finish_reason: OTHER example above; it depends on the request and the particular words used in the text of the request.
Row 31014 - The meeting on Narodnogo Opolcheniya Street in Sverdlovsk was filled with warmth and cherished memories. completed!
The model response did not complete successfully.
Finish reason: STOP.
Finish message: .
Safety ratings: [category: HARM_CATEGORY_HATE_SPEECH probability: NEGLIGIBLE, category: HARM_CATEGORY_DANGEROUS_CONTENT probability: NEGLIGIBLE, category: HARM_CATEGORY_HARASSMENT probability: NEGLIGIBLE, category: HARM_CATEGORY_SEXUALLY_EXPLICIT probability: NEGLIGIBLE].
Row 20209 - A meeting took place on October 6th in Cherepovets, gathering around 1000 teachers. The meeting was organized by Cherepovets educators to advocate for fair working conditions. completed!
The model response did not complete successfully.
Finish reason: OTHER.
Finish message: .
Safety ratings: [].
It seems to be unable to assign a safety rating, or it just doesn't produce parts for some requests, and throws:
   1519     raise ValueError("Multiple content parts are not supported.")
   1520 if not self.parts:
-> 1521     raise ValueError("Content has no parts.")
   1522 return self.parts[0].text

ValueError: Content has no parts.
It does produce output though and seems to display this error after the response has been generated -- at least in my case.
ValueError Traceback (most recent call last)
Cell In[12], line 2
1 for index, row in df_rand.iterrows():
----> 2 result = get_data(row['summary_en'])
3 df_rand.at[index, 'NER check'] = result
4 print(f"Row {index} - {row['summary_en']} completed!")
Cell In[11], line 24, in get_data(prompt)
22 print(f"Error occurred: {e}. Retrying in 15 seconds")
23 time.sleep(15)
---> 24 return responses.text
File ~/.conda/envs/twitter-thesis/lib/python3.12/site-packages/vertexai/generative_models/_generative_models.py:1405, in GenerationResponse.text(self)
1403 if len(self.candidates) > 1:
1404 raise ValueError("Multiple candidates are not supported")
-> 1405 return self.candidates[0].text
File ~/.conda/envs/twitter-thesis/lib/python3.12/site-packages/vertexai/generative_models/_generative_models.py:1461, in Candidate.text(self)
1459 @property
1460 def text(self) -> str:
-> 1461 return self.content.text
File ~/.conda/envs/twitter-thesis/lib/python3.12/site-packages/vertexai/generative_models/_generative_models.py:1521, in Content.text(self)
1519 raise ValueError("Multiple content parts are not supported.")
1520 if not self.parts:
-> 1521 raise ValueError("Content has no parts.")
1522 return self.parts[0].text
ValueError: Content has no parts.
I set stream=False, and that seemed to solve the issue for me; it looks like I was getting multiple candidates in the generated answer with some newline characters.
def generate(query):
    prompt = augment_prompt(query)
    model = GenerativeModel("gemini-1.0-pro-vision-001")
    responses = model.generate_content(
        prompt,
        generation_config={
            "max_output_tokens": 2048,
            "temperature": 0.7,
            "top_p": 1,
        },
        stream=False,
    )
    res = []
    print(responses.text)
#3659 is the new PR to fix this and output more specific errors.
To which PR are you referring?
@holtskinner Could you link the new PR to fix this? Thanks!
@lavinigam-gcp @holtskinner
I am using this code but getting this error... kindly help
import time
from typing import Iterable

from vertexai.generative_models import GenerationResponse, HarmCategory, Part

def generate(
    prompt: list,
    max_output_tokens: int = 8000,
    temperature: float = 0.2,
    top_p: float = 0.4,
    stream: bool = False,
) -> GenerationResponse | Iterable[GenerationResponse]:
    responses = model.generate_content(
        prompt,
        generation_config={
            "max_output_tokens": max_output_tokens,
            "temperature": temperature,
            "top_p": top_p,
        },
        safety_settings={
            HarmCategory.HARM_CATEGORY_HATE_SPEECH: BLOCK_LEVEL,
            HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: BLOCK_LEVEL,
            HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: BLOCK_LEVEL,
            HarmCategory.HARM_CATEGORY_HARASSMENT: BLOCK_LEVEL,
        },
        stream=stream,
    )
    return responses

def retry_generate(pdf_document: Part, prompt: str, question: str):
    predicted = False
    while not predicted:
        try:
            response = generate(
                prompt=[pdf_document, prompt.format(question=question)]
            )
        except Exception as e:
            print("sleeping for 2 seconds ...")
            print(e)
            time.sleep(2)
        else:
            predicted = True
    return response
The output for this is:
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py:1750, in Candidate.text(self)
   1749 try:
-> 1750     return self.content.text
   1751 except (ValueError, AttributeError) as e:
   1752     # Enrich the error message with the whole Candidate.
   1753     # The Content object does not have full information.

File /opt/conda/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py:1830, in Content.text(self)
   1829 if not self.parts:
-> 1830     raise ValueError(
   1831         "Response candidate content has no parts (and thus no text)."
   1832         " The candidate is likely blocked by the safety filters.\n"
   1833         "Content:\n"
   1834         + _dict_to_pretty_string(self.to_dict())
   1835     )
   1836 return self.parts[0].text

ValueError: Response candidate content has no parts (and thus no text). The candidate is likely blocked by the safety filters.
Content:
{}

The above exception was the direct cause of the following exception:

ValueError                                Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py:1667, in GenerationResponse.text(self)
   1666 try:
-> 1667     return self.candidates[0].text
   1668 except (ValueError, AttributeError) as e:
   1669     # Enrich the error message with the whole Response.
   1670     # The Candidate object does not have full information.

File /opt/conda/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py:1754, in Candidate.text(self)
   1751 except (ValueError, AttributeError) as e:
   1752     # Enrich the error message with the whole Candidate.
   1753     # The Content object does not have full information.
-> 1754     raise ValueError(
   1755         "Cannot get the Candidate text.\n"
   1756         f"{e}\n"
   1757         "Candidate:\n"
   1758         + _dict_to_pretty_string(self.to_dict())
   1759     ) from e

ValueError: Cannot get the Candidate text.
Response candidate content has no parts (and thus no text). The candidate is likely blocked by the safety filters.
Content:
{}
Candidate:
{
  "finish_reason": "OTHER"
}

The above exception was the direct cause of the following exception:

ValueError                                Traceback (most recent call last)
Cell In[16], line 7
      4 pdf_document = Part.from_data(data=fp.read(), mime_type="application/pdf")
      6 response = retry_generate(pdf_document, prompt, prompt_test_reports_info)
----> 7 print(response.text)

File /opt/conda/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py:1671, in GenerationResponse.text(self)
   1667     return self.candidates[0].text
   1668 except (ValueError, AttributeError) as e:
   1669     # Enrich the error message with the whole Response.
   1670     # The Candidate object does not have full information.
-> 1671     raise ValueError(
   1672         "Cannot get the response text.\n"
   1673         f"{e}\n"
   1674         "Response:\n"
   1675         + _dict_to_pretty_string(self.to_dict())
   1676     ) from e

ValueError: Cannot get the response text.
Cannot get the Candidate text.
Response candidate content has no parts (and thus no text). The candidate is likely blocked by the safety filters.
Content:
{}
Candidate:
{
  "finish_reason": "OTHER"
}
Response:
{
  "candidates": [
    {
      "finish_reason": "OTHER"
    }
  ],
  "usage_metadata": {
    "prompt_token_count": 3677,
    "total_token_count": 3677
  }
}
The reason for this discrepancy is probably that there is no formal way to retrieve the text from the response without raising an exception.
apiResponse = model.generate_content(prompt)

if apiResponse.candidates[0].finish_reason == Candidate.FinishReason.OTHER:
    break

# In rare cases, candidates[0] may not have a text attribute.
# https://github.com/googleapis/python-aiplatform/blob/94d838d8cfe1599bc2d706e66080c05108821986/vertexai/generative_models/_generative_models.py#L1666
# objResponse["text"] = apiResponse.text  # May result in error
if hasattr(apiResponse.candidates[0], "text"):  # add this line
    objResponse["text"] = apiResponse.candidates[0].text  # May result in ValueError
I am also intermittently experiencing this; I can't really find a way to reproduce it. I have a solution that takes a random sample of about 30 files and sends the file list plus the content of 5 of those files, and it frequently produces:
Response candidate content has no parts (and thus no text). The candidate is likely blocked by the safety filters.
Content:
{}
Candidate:
{
"finish_reason": "OTHER"
}
This seemed to begin happening more often when passing in markdown files.
Is there some way to get a more detailed error response other than
{
"finish_reason": "OTHER"
}
For me this started happening today at random. By any chance is it related to rate limiting?
The error occurs at the level of the library RPC response, and no further information can be obtained because the response contains none. It can also be reproduced with the Vertex AI console tool, so it looks like a server-side problem rather than a library problem. https://console.cloud.google.com/vertex-ai/generative/multimodal/create/text
Interestingly, the occurrence rate of ValueError: Content has no parts. seems to differ by region. For example, in a small sample, europe-west6 was more likely to return a normal response than other regions. Error rates seem to increase when API usage is high in the same region. Since the behavior is correlated across unrelated APIs and the web tool, some account quota restriction may be involved.
Even when ValueError: Content has no parts. occurs, a retry sometimes works normally, so I think this problem is closer to the RPC infrastructure than to the generative AI instances themselves.
If you touch apiResponse.candidates[0].text on the response of model.generate_content, a ValueError: Cannot get the Candidate text. exception will occur, so you can avoid the exception with the following if statement:
if apiResponse.candidates[0].content.parts:
    apiResponse.candidates[0].text
Is there a solution to this or does it remain unresolved?
Why is this issue closed? I am seeing the same problem.
# Define safety settings
safety_config = {
    generative_models.HarmCategory.HARM_CATEGORY_HATE_SPEECH: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_HARASSMENT: generative_models.HarmBlockThreshold.BLOCK_NONE,
}

def gemini_predict(model, document_location, prompt: str, safety_settings=safety_config) -> str:
    image = Image.from_bytes(document_location)
    document = Part.from_image(image)
    response = model.generate_content(
        [
            document,
            prompt,
        ],
        safety_settings=safety_config,
    )
    return response.text

from vertexai.generative_models import GenerativeModel, Part, Image

vertexai.init(project=project_id, location=location)
model = GenerativeModel(model_name="gemini-1.5-flash-001", safety_settings=safety_config)
ValueError                                Traceback (most recent call last)
File ~/SageMaker/custom-miniconda/miniconda/envs/custom_python/lib/python3.11/site-packages/vertexai/generative_models/_generative_models.py:1746, in Candidate.text(self)
   1745 try:
-> 1746     return self.content.text
   1747 except (ValueError, AttributeError) as e:
   1748     # Enrich the error message with the whole Candidate.
   1749     # The Content object does not have full information.

File ~/SageMaker/custom-miniconda/miniconda/envs/custom_python/lib/python3.11/site-packages/vertexai/generative_models/_generative_models.py:1826, in Content.text(self)
   1825 if not self.parts:
-> 1826     raise ValueError(
   1827         "Response candidate content has no parts (and thus no text)."
   1828         " The candidate is likely blocked by the safety filters.\n"
   1829         "Content:\n"
   1830         + _dict_to_pretty_string(self.to_dict())
   1831     )
   1832 return self.parts[0].text

ValueError: Response candidate content has no parts (and thus no text). The candidate is likely blocked by the safety filters.
Content:
{}

The above exception was the direct cause of the following exception:

ValueError                                Traceback (most recent call last)
File ~/SageMaker/custom-miniconda/miniconda/envs/custom_python/lib/python3.11/site-packages/vertexai/generative_models/_generative_models.py:1667, in GenerationResponse.text(self)
   1666 try:
-> 1667     return self.candidates[0].text
   1668 except (ValueError, AttributeError) as e:
   1669     # Enrich the error message with the whole Response.
   1670     # The Candidate object does not have full information.

File ~/SageMaker/custom-miniconda/miniconda/envs/custom_python/lib/python3.11/site-packages/vertexai/generative_models/_generative_models.py:1750, in Candidate.text(self)
   1747 except (ValueError, AttributeError) as e:
   1748     # Enrich the error message with the whole Candidate.
   1749     # The Content object does not have full information.
-> 1750     raise ValueError(
   1751         "Cannot get the Candidate text.\n"
   1752         f"{e}\n"
   1753         "Candidate:\n"
   1754         + _dict_to_pretty_string(self.to_dict())
   1755     ) from e

ValueError: Cannot get the Candidate text.
Response candidate content has no parts (and thus no text). The candidate is likely blocked by the safety filters.
Content:
{}
Candidate:
{
  "finish_reason": "OTHER"
}

The above exception was the direct cause of the following exception:

ValueError                                Traceback (most recent call last)
Cell In[34], line 6
      2 pass
      5 file = "masked_filed_location"
----> 6 text = gemini_predict(model, file, "Which form is this tax document. Output only the form type. Nothing else.")
      8 print(text)

Cell In[33], line 48, in gemini_predict(model, document_location, prompt, safety_settings)
     40 response = model.generate_content(
     41     [
     42         document,
   (...)
     45     safety_settings=safety_config
     46 )
     47 print(response)
---> 48 return response.text

File ~/SageMaker/custom-miniconda/miniconda/envs/custom_python/lib/python3.11/site-packages/vertexai/generative_models/_generative_models.py:1671, in GenerationResponse.text(self)
   1667     return self.candidates[0].text
   1668 except (ValueError, AttributeError) as e:
   1669     # Enrich the error message with the whole Response.
   1670     # The Candidate object does not have full information.
-> 1671     raise ValueError(
   1672         "Cannot get the response text.\n"
   1673         f"{e}\n"
   1674         "Response:\n"
   1675         + _dict_to_pretty_string(self.to_dict())
   1676     ) from e

ValueError: Cannot get the response text.
Cannot get the Candidate text.
Response candidate content has no parts (and thus no text). The candidate is likely blocked by the safety filters.
Content:
{}
Candidate:
{
  "finish_reason": "OTHER"
}
Response:
{
  "candidates": [
    {
      "finish_reason": "OTHER"
    }
  ],
  "usage_metadata": {
    "prompt_token_count": 274,
    "total_token_count": 274
  }
}
Please re-open this issue. I'm still experiencing this problem with Gemini 1.5 Flash in europe-west3!!
I see this issue too. I'll raise it internally with the engineering team.
Still see the issue as well.
I already tried to configure safety settings in my Python code:
safety_settings = {
    generative_models.HarmCategory.HARM_CATEGORY_HATE_SPEECH: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_HARASSMENT: generative_models.HarmBlockThreshold.BLOCK_NONE,
}
But it does not work :)
Also getting this error. Mine shows up when I'm trying to add a RAG tool.
Hi everyone,
It looks like this issue was primarily caused by the language preprocessor classifying certain prompts (especially those with many numbers or mixed languages) as unsupported, leading to the "recitation blocking" and the resulting "OTHER" output [ValueError: Content has no parts].
The good news is that the internal team updated all 1.5* models on August 8th, 2024 to remove the language filter. This should resolve the problem.
To fix this: please update your "google-cloud-aiplatform" package and make sure you're using a Gemini 1.5 family model.
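For example (project, location, and prompt are placeholders):

# Shell: pip install --upgrade google-cloud-aiplatform

import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="your-project-id", location="us-central1")
model = GenerativeModel("gemini-1.5-flash-001")  # a Gemini 1.5 family model
print(model.generate_content("Hello!").text)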
If you continue to experience any issues, please feel free to let us know here or on the official SDK issue page. We're happy to help!
Hello, I successfully ran the intro_multimodal_rag example, but when I tried my own PDF I encountered the following error. Any suggestions?