langchain-ai / langchain

πŸ¦œπŸ”— Build context-aware reasoning applications
https://python.langchain.com
MIT License

Langchain crashes when retrieving results from vertexai codey models #12156

Closed · northdpole closed this 7 months ago

northdpole commented 10 months ago

System Info

Hey folks, I think i stumbled on a bug (or i'm using langchain wrong)

LangChain version: 0.0.320
Platform: Ubuntu 23.04
Python: 3.11.4

Who can help?

@hwchase17 , @agola11

Information

Related Components

Reproduction

Run the following:

```python
from langchain import llms   # import assumed from the snippet's use of llms.VertexAI
from pprint import pprint

llm = llms.VertexAI(model_name="code-bison@001", max_output_tokens=1000, temperature=0.0)
prediction = llm.predict("""write a fibonacci sequence in python""")

pprint(prediction)
```
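(Not part of the original report: the snippet assumes Vertex AI credentials and project configuration are already in place. A minimal, hypothetical setup, with placeholder project and region values, would look something like this.)

```python
import vertexai

# Placeholder values -- substitute your own GCP project and region.
vertexai.init(project="my-gcp-project", location="us-central1")
```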

Expected behavior

We get a prediction

(Adding more info here since the issue form ran out of fields.)

I think the bug is in `llms/vertexai.py:301`. The variable `res` there is a `TextGenerationResponse` rather than a `MultiCandidateTextGenerationResponse`, so it has no `candidates` attribute, which the code expects as it would for a chat model.
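To make the mismatch concrete, here is a rough sketch (not from the original report) that calls the Vertex AI SDK directly; it assumes the SDK version in use at the time of this report, where codey models return a plain `TextGenerationResponse`:

```python
from vertexai.language_models import ChatModel, CodeGenerationModel

# Codey path: predict() hands back a single TextGenerationResponse.
code_model = CodeGenerationModel.from_pretrained("code-bison@001")
code_res = code_model.predict(prefix="write a fibonacci sequence in python")
print(type(code_res).__name__)           # expected: TextGenerationResponse
print(hasattr(code_res, "candidates"))   # expected: False -- hence the crash on res.candidates

# Chat path: responses are parsed into a MultiCandidateTextGenerationResponse.
chat_model = ChatModel.from_pretrained("chat-bison@001")
chat_res = chat_model.start_chat().send_message("hello")
print(hasattr(chat_res, "candidates"))   # expected: True
```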

This happens because in Google's SDK (`vertexai/language_models/_language_models.py`) the method `_ChatSessionBase._parse_chat_prediction_response` returns a `MultiCandidateTextGenerationResponse`, while both `CodeChatSession` and `CodeGenerationModel` return a plain `TextGenerationResponse`.

I think the fix might be replacing

```python
generations.append([_response_to_generation(r) for r in res.candidates])
```

with

```python
if self.is_codey_model:
    generations.append([_response_to_generation(res)])
else:
    generations.append([_response_to_generation(r) for r in res.candidates])
```
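For context, here is a minimal sketch of how that branch might sit inside the response-handling loop of `llms/vertexai.py`. It is paraphrased from memory rather than copied from 0.0.320, so the surrounding names (`completion_with_retry`, `prompts`, `params`, `run_manager`, `LLMResult`) are assumptions about that version's structure:

```python
# Sketch only -- a paraphrase of the generate loop, not the verbatim 0.0.320 source.
generations = []
for prompt in prompts:
    res = completion_with_retry(self, prompt, run_manager=run_manager, **params)
    if self.is_codey_model:
        # Codey models return a single TextGenerationResponse, so wrap it directly.
        generations.append([_response_to_generation(res)])
    else:
        # Chat/text models return a response carrying a list of candidates.
        generations.append([_response_to_generation(r) for r in res.candidates])
return LLMResult(generations=generations)
```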

Happy to send a PR if it helps.

dosubot[bot] commented 10 months ago

πŸ€–

Thank you for reporting this issue with LangChain and providing a potential fix! Your proposed solution seems reasonable. I encourage you to create a pull request with your changes so that the maintainers can review and consider merging it into the codebase. Your contribution will help improve LangChain for everyone. Keep up the great work!


This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a πŸ‘ if this is helpful and πŸ‘Ž if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

dosubot[bot] commented 7 months ago

Hi, @northdpole

I'm helping the LangChain team manage their backlog and am marking this issue as stale. From what I understand, Langchain was crashing when retrieving results from VertexAI Codey models due to a bug in the llms/vertexai.py file. I acknowledged the report and encouraged you to create a pull request with the suggested fix for review by the maintainers.

Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.