Closed: ulyaatsaniya closed this issue 7 months ago
🤖
Hello,
The error you're encountering is due to the `candidate_count` parameter being passed to the `predict` method of the `TextGenerationModel` class, which does not expect this parameter. This happens in the `_completion_with_retry` method of the `ChatVertexAI` class in `vertexai.py`.
Here is the relevant code:

```python
msg_params = {}
if "candidate_count" in params:
    msg_params["candidate_count"] = params.pop("candidate_count")
chat = self._start_chat(history, **params)
response = chat.send_message(question.content, **msg_params)
```
To resolve this issue, you should remove the `candidate_count` parameter from the `msg_params` dictionary before calling the `send_message` method. If you need `candidate_count` for some other purpose, you should handle it separately.
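As a generic illustration of "handling it separately" (a sketch, not LangChain's actual code — the `predict` function below is a stand-in for the SDK method, and `filter_supported_kwargs` is a hypothetical helper), unsupported keyword arguments can be filtered against the callee's signature before forwarding:

```python
import inspect

def filter_supported_kwargs(func, kwargs):
    """Keep only kwargs that the target callable explicitly declares.

    Note: this simple check would also drop valid keys if func itself
    took **kwargs; it is only meant to illustrate the idea.
    """
    accepted = set(inspect.signature(func).parameters)
    return {k: v for k, v in kwargs.items() if k in accepted}

# Stand-in for an SDK method that does not accept candidate_count.
def predict(prompt, temperature=0.0):
    return f"prompt={prompt!r}, temperature={temperature}"

params = {"temperature": 0.2, "candidate_count": 3}
safe = filter_supported_kwargs(predict, params)  # candidate_count is dropped
result = predict("hello", **safe)  # no TypeError
```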
Unfortunately, I couldn't find any specific documentation about the `candidate_count` parameter within the LangChain repository. If you could provide more details about how you're using this parameter, I might be able to give a more specific solution.
I hope this helps! If you have any other questions or need further clarification, feel free to ask.
Best regards.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
@dosu-bot
Hi, it's still not working.
I was calling it with the retriever:

```python
import langchain
import vertexai
from langchain.chains import RetrievalQA          # import added for completeness
from langchain.llms import VertexAI
from langchain.prompts import PromptTemplate      # import added for completeness
from langchain.retrievers import (
    GoogleVertexAISearchRetriever,
    GoogleVertexAIMultiTurnSearchRetriever,
)

vertexai.init(project=PROJECT_ID, location=REGION)
llm = VertexAI(model_name=MODEL)

retriever = GoogleVertexAIMultiTurnSearchRetriever(
    project_id=PROJECT_ID,
    location_id=DATA_STORE_LOCATION,
    data_store_id=DATA_STORE_ID,
    max_extractive_answer_count=3,
    get_extractive_answers=True,
)

prompt = PromptTemplate(
    template=prompt_template, input_variables=["context", "question"]
)

qa_chain = RetrievalQA.from_llm(
    llm=llm, prompt=prompt, retriever=retriever, return_source_documents=True
)

query = "How to get insurance benefit?"
results = qa_chain({"query": query})
```
🤖
Hello,
Thank you for reaching out and providing a detailed description of the issue you're facing. The error you're encountering is due to the `candidate_count` argument being passed to the `send_message` and `send_message_async` methods of the `ChatSession` or `CodeChatSession` object, which do not accept this argument.
To resolve this issue, you should remove the lines of code that extract the `candidate_count` parameter from the `params` dictionary and add it to the `msg_params` dictionary in the `ChatVertexAI` class. Here are the corrected `_generate` and `_agenerate` methods:
```python
def _generate(
    self,
    messages: List[BaseMessage],
    stop: Optional[List[str]] = None,
    run_manager: Optional[CallbackManagerForLLMRun] = None,
    stream: Optional[bool] = None,
    **kwargs: Any,
) -> ChatResult:
    should_stream = stream if stream is not None else self.streaming
    if should_stream:
        stream_iter = self._stream(
            messages, stop=stop, run_manager=run_manager, **kwargs
        )
        return _generate_from_stream(stream_iter)
    question = _get_question(messages)
    history = _parse_chat_history(messages[:-1])
    params = self._prepare_params(stop=stop, stream=False, **kwargs)
    examples = kwargs.get("examples") or self.examples
    if examples:
        params["examples"] = _parse_examples(examples)
    chat = self._start_chat(history, **params)
    response = chat.send_message(question.content)
    generations = [
        ChatGeneration(message=AIMessage(content=r.text))
        for r in response.candidates
    ]
    return ChatResult(generations=generations)

async def _agenerate(
    self,
    messages: List[BaseMessage],
    stop: Optional[List[str]] = None,
    run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
    **kwargs: Any,
) -> ChatResult:
    if "stream" in kwargs:
        kwargs.pop("stream")
        logger.warning("ChatVertexAI does not currently support async streaming.")
    question = _get_question(messages)
    history = _parse_chat_history(messages[:-1])
    params = self._prepare_params(stop=stop, **kwargs)
    examples = kwargs.get("examples", None)
    if examples:
        params["examples"] = _parse_examples(examples)
    chat = self._start_chat(history, **params)
    response = await chat.send_message_async(question.content)
    generations = [
        ChatGeneration(message=AIMessage(content=r.text))
        for r in response.candidates
    ]
    return ChatResult(generations=generations)
```
In these corrected methods, `send_message` and `send_message_async` are called with only the `question.content` argument, and no `msg_params` dictionary is used. This should resolve the `TypeError` you're encountering.
Please let me know if this resolves your issue or if you have any other questions.
Best, Dosu
@dosu-bot
Still doesn't work
I am encountering the same issue
```python
llm = VertexAI(temperature=0, model_name="text-bison@001")
my_text = "What day comes after Friday?"
llm(my_text)
```

```
TypeError: _TextGenerationModel.predict() got an unexpected keyword argument 'candidate_count'
```
What version of langchain are you using?
Hi @angussaw, I'm using langchain version 0.0.317, released on Oct 18, 2023.
downgrading to 0.0.310 works for me
Yes, but `GoogleVertexAIMultiTurnSearchRetriever` cannot be imported on versions above 0.0.310.
@ulyaatsaniya
I was facing the same issue while working with LangChain, Vertex AI, PaLM 2, and ChromaDB. Downgrading 'google-cloud-aiplatform' and 'langchain' worked for me. I downgraded to the following:

```
google-cloud-aiplatform==1.34.0
langchain==0.0.310
```

If downgrading langchain is not feasible for you (as you mentioned earlier), try downgrading just 'google-cloud-aiplatform'.
Hi, @ulyaatsaniya,
I'm helping the LangChain team manage their backlog and am marking this issue as stale. From what I understand, you encountered a TypeError when using TextGenerationModel.predict() with an unexpected keyword argument 'candidate_count' while building a ChatBot with Langchain on VertexAI. There were various suggestions and discussions on potential solutions, and ultimately, bumping 'google-cloud-aiplatform' to version 1.35.0 resolved the issue.
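If it helps, here is a small stdlib-only sketch of a runtime guard based on the resolution above (the 1.35.0 threshold comes from this thread; `aiplatform_accepts_candidate_count` is a hypothetical helper, not part of any library):

```python
from importlib import metadata

def aiplatform_accepts_candidate_count(version_str=None):
    """Return True if google-cloud-aiplatform is >= 1.35.0, the version
    reported in this thread to accept candidate_count."""
    if version_str is None:
        # Read the installed package version when none is supplied.
        version_str = metadata.version("google-cloud-aiplatform")
    major, minor = (int(p) for p in version_str.split(".")[:2])
    return (major, minor) >= (1, 35)
```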
Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.
Thank you for your understanding and contribution to LangChain!
Issue you'd like to raise.
I got this error when I built a ChatBot with LangChain using Vertex AI, and I couldn't find any details about it so far.
```
File /opt/conda/lib/python3.10/site-packages/langchain/llms/vertexai.py:100, in completion_with_retry.<locals>._completion_with_retry(*args, **kwargs)
     98 @retry_decorator
     99 def _completion_with_retry(*args: Any, **kwargs: Any) -> Any:
--> 100     return llm.client.predict(*args, **kwargs)

TypeError: TextGenerationModel.predict() got an unexpected keyword argument 'candidate_count'
```
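The failure mode here is plain Python keyword handling: the wrapper forwards a keyword argument that the installed SDK's method signature does not declare. A minimal stand-alone reproduction (the `predict` below is a dummy, not the Vertex AI SDK):

```python
def predict(prompt, temperature=0.0):  # no candidate_count parameter
    return prompt

try:
    predict("What day comes after Friday?", candidate_count=1)
except TypeError as exc:
    message = str(exc)

print(message)  # e.g. "predict() got an unexpected keyword argument 'candidate_count'"
```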
Suggestion:
No response