Closed: vecorro closed this issue 6 months ago.
I just tried to use langchain.output_parsers.json.parse_partial_json
as the output parser for the ConversationChain, hoping it could fix the malformed JSON output. However, it is not possible to initialize ConversationChain with that output parser, as I get this error:
ValidationError: 1 validation error for ConversationChain
output_parser
instance of BaseLLMOutputParser expected (type=type_error.arbitrary_type; expected_arbitrary_type=BaseLLMOutputParser)
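One possible workaround (a sketch, assuming LangChain 0.0.321; the class name is illustrative): ConversationChain's pydantic validation requires a BaseLLMOutputParser instance rather than a bare function, so parse_partial_json would need to be wrapped in a BaseOutputParser subclass:

```python
from langchain.output_parsers.json import parse_partial_json
from langchain.schema import BaseOutputParser


class PartialJSONOutputParser(BaseOutputParser):
    """Hypothetical wrapper that makes parse_partial_json usable as a chain output parser."""

    def parse(self, text: str):
        # parse_partial_json tolerates truncated/malformed JSON and returns
        # whatever object it can recover from the text.
        return parse_partial_json(text)


# Passing an instance (not the bare function) satisfies the pydantic check:
# chain = ConversationChain(llm=llm, output_parser=PartialJSONOutputParser())
```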
Hi, @vecorro,
I'm helping the LangChain team manage their backlog and am marking this issue as stale. It seems that you encountered an error when attempting to build a MultiRetrievalQAChain using Llama2 chat models served by vLLM. The error "Got invalid JSON object" was related to a deprecated method and the lack of guidance on implementing an output parser for the LLM (ConversationChain) chain. In a recent comment, you attempted to use a specific output parser but encountered a validation error. Further guidance is being sought on how to resolve this issue.
Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or the issue will be automatically closed in 7 days.
Thank you for your understanding and cooperation.
System Info
LangChain 0.0.321, Python 3.10
I'm trying to build a MultiRetrievalQAChain using only Llama 2 chat models served by vLLM (no OpenAI). To that end, I have created a ConversationChain that acts as the default chain for the MultiRetrievalQAChain, and I have customized the prompts for both chains to meet the Llama 2 chat format requirements. It looks like the routing chain works properly, but I'm getting the following exception ("Got invalid JSON object"):
Here is the routing and generation trace:
The issue seems to be related to a warning that I'm also getting:
llm.py:280: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.
Unfortunately, it is unclear how one is supposed to implement an output parser for the LLM chain (ConversationChain) that meets the expectations of the MultiRetrievalQAChain. The documentation for these chains relies heavily on OpenAI models to handle the formatting, and there is not much guidance on how to do this with other LLMs.
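For reference, the router step of MultiRetrievalQAChain parses the routing LLM's reply with RouterOutputParser, which expects a JSON object containing "destination" and "next_inputs" keys. A minimal sketch of one way to make that step more tolerant of Llama 2 chat output (the subclass and the regex are illustrative assumptions, not a documented API):

```python
import re

from langchain.chains.router.llm_router import RouterOutputParser


class Llama2RouterOutputParser(RouterOutputParser):
    """Hypothetical parser: Llama 2 chat models often wrap the routing JSON
    in extra prose or omit the markdown fence, so extract the outermost
    {...} block before delegating to the stock parser."""

    def parse(self, text: str) -> dict:
        match = re.search(r"\{.*\}", text, re.DOTALL)
        if match:
            # parse_json_markdown, used by the parent class, also accepts
            # un-fenced JSON, so the bare object is enough.
            text = match.group(0)
        return super().parse(text)


# Retrieval destination chains expect the inner key "query":
# parser = Llama2RouterOutputParser(next_inputs_inner_key="query")
```

Since from_retrievers hardcodes the stock RouterOutputParser (and the stock router template), using a custom parser like this would mean building the router prompt and LLMRouterChain by hand and instantiating MultiRetrievalQAChain directly.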
Any guidance on how to move forward would be appreciated.
Here is my code:
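A minimal sketch of the setup described above (not the original script; the endpoint, model name, retrievers, and prompts are hypothetical stand-ins, assuming vLLM's OpenAI-compatible server and LangChain 0.0.321):

```python
from langchain.chains import ConversationChain
from langchain.chains.router import MultiRetrievalQAChain
from langchain.llms import VLLMOpenAI
from langchain.prompts import PromptTemplate
from langchain.retrievers import TFIDFRetriever  # stand-in retriever; needs scikit-learn

# Llama 2 chat model served through vLLM's OpenAI-compatible endpoint (assumed).
llm = VLLMOpenAI(
    openai_api_base="http://localhost:8000/v1",
    openai_api_key="EMPTY",
    model_name="meta-llama/Llama-2-13b-chat-hf",
)

# Default-chain prompt in Llama 2 chat format ([INST] ... [/INST]).
# The router hands the default chain {"query": ...}, so input_key must be
# "query", and output_key must be "result" to match the outer chain's
# output key (otherwise the run fails with "Missing some output keys").
default_prompt = PromptTemplate(
    input_variables=["history", "query"],
    template="[INST] {history}\nUser: {query} [/INST]",
)
default_chain = ConversationChain(
    llm=llm, prompt=default_prompt, input_key="query", output_key="result"
)

# QA prompt for the destination retrieval chains, also in Llama 2 format.
qa_prompt = PromptTemplate(
    input_variables=["context", "question"],
    template="[INST] Use the context to answer.\n{context}\nQuestion: {question} [/INST]",
)

retriever_infos = [
    {
        "name": "physics_docs",  # hypothetical destination
        "description": "Good for answering questions about physics",
        "retriever": TFIDFRetriever.from_texts(["sample physics notes"]),
        "prompt": qa_prompt,
    },
    {
        "name": "history_docs",  # hypothetical destination
        "description": "Good for answering questions about history",
        "retriever": TFIDFRetriever.from_texts(["sample history notes"]),
        "prompt": qa_prompt,
    },
]

# Note: the routing prompt itself stays the stock template here; only the
# default and destination chains are customized for Llama 2.
chain = MultiRetrievalQAChain.from_retrievers(
    llm=llm,
    retriever_infos=retriever_infos,
    default_chain=default_chain,
    verbose=True,
)

print(chain.run("Why is the sky blue?"))
```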
Who can help?
No response
Information
Related Components
Reproduction
I included the entire script I'm using above.
Expected behavior
Proper routing of each query/question to the retriever best suited to provide the content the LLM needs to answer it.