Closed northdpole closed 7 months ago
🤖
Thank you for reporting this issue with LangChain and providing a potential fix! Your proposed solution seems reasonable. I encourage you to create a pull request with your changes so that the maintainers can review and consider merging it into the codebase. Your contribution will help improve LangChain for everyone. Keep up the great work!
Hi, @northdpole
I'm helping the LangChain team manage their backlog and am marking this issue as stale. From what I understand, LangChain was crashing when retrieving results from VertexAI Codey models due to a bug in the llms/vertexai.py file. I acknowledged the report and encouraged you to create a pull request with the suggested fix for review by the maintainers.
Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.
System Info
Hey folks, I think I stumbled on a bug (or I'm using LangChain wrong).

LangChain version: 0.0.320
Platform: Ubuntu 23.04
Python: 3.11.4
Who can help?
@hwchase17, @agola11
Information
Related Components
Reproduction
Run the following:

```python
from langchain import llms
from pprint import pprint

llm = llms.VertexAI(model_name="code-bison@001", max_output_tokens=1000, temperature=0.0)
prediction = llm.predict("""write a fibonacci sequence in python""")
pprint(prediction)
```
Expected behavior
We get a prediction
(adding more info since the form has run out of fields)
I think the bug is in llms/vertexai.py:301. The variable res is a TextGenerationResponse, as opposed to a MultiCandidateTextGenerationResponse, so there is no "candidates" attribute as you would expect from a chat model.
This happens because Google's SDK (vertexai/language_models/_language_models.py) returns a MultiCandidateTextGenerationResponse for chat models, but both CodeChatSession and CodeGenerationModel return a TextGenerationResponse.
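To make the mismatch concrete, here is a minimal, self-contained sketch. The two classes below are hand-written stand-ins for the real SDK response types (not the actual vertexai implementations), used only to show why code that assumes .candidates crashes on the Codey models' single-result response:

```python
# Hypothetical stand-ins for the two response shapes described above;
# the real classes live in Google's vertexai SDK.

class FakeTextGenerationResponse:
    """Single-result response, like Codey models return: has no .candidates."""
    def __init__(self, text: str):
        self.text = text

class FakeMultiCandidateResponse:
    """Chat-model-style response: carries a list of candidate responses."""
    def __init__(self, candidates):
        self.candidates = candidates

def collect_candidates(res):
    # Mirrors the assumption in llms/vertexai.py: .candidates always exists.
    return [c.text for c in res.candidates]

chat_res = FakeMultiCandidateResponse([FakeTextGenerationResponse("hello")])
print(collect_candidates(chat_res))  # ['hello']

code_res = FakeTextGenerationResponse("def fib(n): ...")
try:
    collect_candidates(code_res)
except AttributeError as err:
    print("crash:", err)  # this is the failure mode reported in this issue
```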
I think the fix might be replacing

generations.append([_response_to_generation(r) for r in res.candidates])

with something that also handles a plain TextGenerationResponse (which has no candidates attribute). Happy to send a PR if it helps.
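For reference, one guard along those lines could fall back to the response itself when no candidates list is present. This is only a sketch against stand-in classes, not the actual LangChain code; the FakeResponse class and the _response_to_generation body here are assumptions made for illustration:

```python
# Sketch of a defensive fix: treat a plain single-result response as its
# own single candidate. FakeResponse and this _response_to_generation are
# hypothetical stand-ins, not LangChain's real types.

class FakeResponse:
    def __init__(self, text, candidates=None):
        self.text = text
        if candidates is not None:
            self.candidates = candidates

def _response_to_generation(r):
    # Stand-in for LangChain's helper that wraps a response's text.
    return {"text": r.text}

def to_generations(res):
    # If the response has candidates (chat models), use them; otherwise
    # treat the response itself as the single candidate (Codey models).
    candidates = getattr(res, "candidates", None) or [res]
    return [_response_to_generation(r) for r in candidates]

print(to_generations(FakeResponse("single")))  # [{'text': 'single'}]
print(to_generations(FakeResponse("m", candidates=[FakeResponse("a"), FakeResponse("b")])))
```

With a guard like this, both response shapes flow through the same append call, which is the smallest change that matches the report above.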