Closed: UsamaHussain8 closed this issue 2 months ago
prompt_template = """You are engaged in conversation with a human,
your responses will be generated using a comprehensive long document as a contextual reference.
You can summarize long documents and also provide comprehensive answers, depending on what the user has asked.
You also take context in consideration and answer based on chat history.
Chat History: {context}
Question: {question}
Answer :
"""
From my understanding, you don't need to write prompt_template as an f-string; just write {context} and {question} with single curly braces.
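For example, something like this (a minimal sketch, assuming the standard PromptTemplate from langchain.prompts; the template text is abbreviated from above):

from langchain.prompts import PromptTemplate

# Plain string, not an f-string, so the single-brace placeholders survive
# for LangChain to fill in later.
prompt_template = """You are engaged in conversation with a human, ...
Chat History: {context}
Question: {question}
Answer:
"""

PROMPT = PromptTemplate.from_template(prompt_template)
# LangChain substitutes the placeholders at call time, e.g.:
print(PROMPT.format(context="(chat history here)", question="What was discussed?"))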
prompt_template = """You are engaged in conversation with a human, your responses will be generated using a comprehensive long document as a contextual reference. You can summarize long documents and also provide comprehensive answers, depending on what the user has asked. You also take context in consideration and answer based on chat history. Chat History: {context} Question: {question} Answer : """
From my understanding you don't need to write the prompt_template in a f string and write the context and question in single curly braces
Yeah I corrected that. However, it results in the same error.
Hello, did you find any solution? I got the same issue.
Any update here? I'm noticing that it works for the first .invoke() call, but fails with this same exception on the second .invoke() call after a non-empty chat history is included. With an empty chat history the chain goes ConversationalRetrievalChain -> StuffDocumentsChain -> LLMChain, whereas with a non-empty chat history it goes ConversationalRetrievalChain -> LLMChain.
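For reference, this is roughly the calling pattern (a sketch only: llm, retriever, and qa_prompt stand in for whatever you already have configured, and the questions are made up):

from langchain.chains import ConversationalRetrievalChain

# Mirrors the setup above; only meant to show the two-call pattern.
chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=retriever,
    condense_question_prompt=qa_prompt,
)

# First .invoke() with an empty chat history: works, because the condense step
# is skipped (ConversationalRetrievalChain -> StuffDocumentsChain -> LLMChain).
first = chain.invoke({"question": "What does the document cover?", "chat_history": []})

# Second .invoke() with a non-empty chat history: the condense LLMChain runs first
# and raises the same missing-input error.
history = [("What does the document cover?", first["answer"])]
second = chain.invoke({"question": "Can you summarize that?", "chat_history": history})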
UPDATE/WORKAROUND: This happens because, when there is a chat history, the chain first rephrases the chat history and question into a condensed "standalone" question via an LLMChain. However, despite ConversationalRetrievalChain.from_llm seemingly setting the default question_generator to a chain whose prompt doesn't include "context", some default prompt with "context" as an input appears to be used. So the question_generator needs to be set manually after from_llm is called. Here is what I did:
from langchain.chains import ConversationalRetrievalChain, LLMChain
from langchain.prompts import PromptTemplate

# Condense prompt that only needs chat_history and question (no "context" input).
CONDENSE_PROMPT = PromptTemplate.from_template(
    "Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.\n\n"
    "Chat History:\n{chat_history}\nFollow Up Input: {question}\nStandalone question:"
)
question_generator = LLMChain(
    llm=llm,
    prompt=CONDENSE_PROMPT,
)
doc_chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=retriever,
    condense_question_prompt=qa_prompt,  # qa_prompt is the QA prompt defined earlier
    # question_generator=question_generator,  # setting this here DOES NOT work either
    return_source_documents=True,
    rephrase_question=True,
    verbose=True,
)
# Override the question generator after construction so the condense step
# uses CONDENSE_PROMPT instead of a prompt that expects "context".
doc_chain.question_generator = question_generator
result = await doc_chain.ainvoke(  # call from inside an async function
    {"question": "some question there", "chat_history": chat_history_list}
)
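Assigning question_generator on the already-constructed chain replaces whichever condense chain from_llm built, so the rephrasing step only expects chat_history and question and no longer fails on the missing "context" input.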