yifanmai opened this issue 3 months ago
Sorry, this seems to be fixed in the latest version of the package, so I will close this.
@yifanmai Can you reopen this issue? I am still facing it with the following versions.
google-cloud-aiplatform==1.45.0 vertexai==1.43.0
Same here
I faced the same issue too.
Here's the full traceback
Traceback (most recent call last):
File "/app/app/core/custom_chains/streaming_chain.py", line 19, in task
self(input)
File "/usr/local/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 383, in __call__
return self.invoke(
File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 168, in invoke
raise e
File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 158, in invoke
self._call(inputs, run_manager=run_manager)
File "/usr/local/lib/python3.10/site-packages/langchain/chains/llm.py", line 103, in _call
response = self.generate([inputs], run_manager=run_manager)
File "/usr/local/lib/python3.10/site-packages/langchain/chains/llm.py", line 115, in generate
return self.llm.generate_prompt(
File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 571, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 434, in generate
raise e
File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 424, in generate
self._generate_with_cache(
File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 608, in _generate_with_cache
result = self._generate(
File "/usr/local/lib/python3.10/site-packages/langchain_community/chat_models/vertexai.py", line 279, in _generate
return generate_from_stream(stream_iter)
File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 62, in generate_from_stream
for chunk in stream:
File "/usr/local/lib/python3.10/site-packages/langchain_community/chat_models/vertexai.py", line 378, in _stream
for response in responses:
File "/usr/local/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py", line 918, in _send_message_streaming
_append_response(full_response, chunk)
File "/usr/local/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py", line 1591, in _append_response
_append_gapic_response(
File "/usr/local/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py", line 1611, in _append_gapic_response
_append_gapic_candidate(base_response.candidates[idx], candidate)
File "/usr/local/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py", line 1633, in _append_gapic_candidate
_append_gapic_content(base_candidate.content, new_candidate.content)
File "/usr/local/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py", line 1651, in _append_gapic_content
raise ValueError(
ValueError: Content roles do not match: model !=
@bhavan-kaya This is related to #257
Please see also https://issuetracker.google.com/issues/331677495 - you can comment there.
Can you try with the following versions?
google-cloud-aiplatform==1.46.0 vertexai==1.46.0
Still the same.
From my observation, the root cause of this issue is related to #257. For some queries, Gemini fails to generate a response because of "finish_reason: RECITATION".
As a result, the new_content of that chunk is empty and carries no role (an empty role), which trips the role-equality check during stream aggregation.
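The failure mode above can be sketched with plain dataclasses. This is an illustration, not the library's actual code: Content, append_content, and append_content_safe are hypothetical names that only mirror the role check performed inside vertexai's _append_gapic_content. A RECITATION-blocked chunk arrives with an empty role and no parts, so the strict role comparison raises; skipping such empty chunks would avoid the spurious mismatch.

```python
from dataclasses import dataclass, field

@dataclass
class Content:
    # Minimal stand-in for the GAPIC Content message (hypothetical).
    role: str = ""
    parts: list = field(default_factory=list)

def append_content(base: Content, new: Content) -> None:
    # Mirrors the strict role check in _append_gapic_content:
    # the aggregated role and the incoming chunk's role must match.
    if base.role != new.role:
        raise ValueError(f"Content roles do not match: {base.role} != {new.role}")
    base.parts.extend(new.parts)

def append_content_safe(base: Content, new: Content) -> None:
    # Workaround sketch: a RECITATION-blocked chunk has no parts and an
    # empty role, so skipping it sidesteps the spurious mismatch.
    if not new.role and not new.parts:
        return
    append_content(base, new)

base = Content(role="model", parts=["Hello"])
empty = Content()  # shape of a RECITATION-blocked chunk

try:
    append_content(base, empty)
except ValueError as e:
    print(e)  # prints: Content roles do not match: model != 

append_content_safe(base, empty)  # silently skipped
append_content_safe(base, Content(role="model", parts=[" world"]))
print(base.parts)  # prints: ['Hello', ' world']
```

The sketch suggests the upstream fix belongs in the aggregation step (tolerate empty candidates) rather than in callers, since callers never see the intermediate chunks.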
Traceback (most recent call last):
File "C:\Users\ASUS\PycharmProjects\pythonProject18\one.py", line 451, in <module>
generate_streaming_mistral_response(
File "C:\Users\ASUS\PycharmProjects\pythonProject18\one.py", line 60, in generate_streaming_mistral_response
for chunk in chain_with_summarization.stream(user_input, {"configurable": {"session_id": conversation_id}}):
File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 2822, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 2809, in transform
yield from self._transform_stream_with_config(
File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 1880, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 2773, in _transform
for output in final_pipeline:
File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 4669, in transform
yield from self.bound.transform(
File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 4669, in transform
yield from self.bound.transform(
File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 2809, in transform
yield from self._transform_stream_with_config(
File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 1880, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 2773, in _transform
for output in final_pipeline:
File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 4669, in transform
yield from self.bound.transform(
File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 2809, in transform
yield from self._transform_stream_with_config(
File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 1880, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 2773, in _transform
for output in final_pipeline:
File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\output_parsers\transform.py", line 50, in transform
yield from self._transform_stream_with_config(
File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 1880, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\output_parsers\transform.py", line 29, in _transform
for chunk in input:
File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\runnables\base.py", line 1300, in transform
yield from self.stream(final, config, **kwargs)
File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\language_models\chat_models.py", line 241, in stream
raise e
File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_core\language_models\chat_models.py", line 223, in stream
for chunk in self._stream(messages, stop=stop, **kwargs):
File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\langchain_google_vertexai\chat_models.py", line 527, in _stream
for response in responses:
File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\vertexai\generative_models\_generative_models.py", line 968, in _send_message_streaming
_append_response(full_response, chunk)
File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\vertexai\generative_models\_generative_models.py", line 1877, in _append_response
_append_gapic_response(
File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\vertexai\generative_models\_generative_models.py", line 1899, in _append_gapic_response
_append_gapic_candidate(base_response.candidates[idx], candidate)
File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\vertexai\generative_models\_generative_models.py", line 1922, in _append_gapic_candidate
_append_gapic_content(base_candidate.content, new_candidate.content)
File "C:\Users\ASUS\PycharmProjects\pythonProject18\venv\lib\site-packages\vertexai\generative_models\_generative_models.py", line 1942, in _append_gapic_content
raise ValueError(
ValueError: Content roles do not match: model !=
Having this issue as well with both gemini-1.5-flash and gemini-1.5-pro.
Environment details
google-cloud-aiplatform version: 1.38.1
Steps to reproduce
Send the prompt listed below to gemini-1.0-pro-001.
Code example
Stack trace
Expected Behavior
The chunks returned are as follows:
Instead of getting an error, I would expect these two chunks to be successfully merged into the following. Alternatively, I would expect the error message to be less cryptic.
Edit: Changed example prompt to shorter example.