You are not using the correct input_key that goes along with the memory_key. Please see below:
from langchain.chains import ConversationalRetrievalChain, LLMChain
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate

question_generator_template = PromptTemplate(
    input_variables=[  # your input variables are not correct
        "chat_history",
        "question",
    ],
    template=(
        """
Combine the chat history and follow up question into a standalone question.
Chat History: {chat_history}
Follow up question: {question}
"""
    ),
)
question_generator_chain = LLMChain(
    llm=llm,
    prompt=question_generator_template,
)

# Create the retrieval chain
retrieval_chain = ConversationalRetrievalChain(
    combine_docs_chain=document_chain,
    question_generator=question_generator_chain,
    retriever=retriever,
    # Use "question" as the input key so the memory can find the user input.
    memory=ConversationBufferMemory(memory_key="chat_history", input_key="question"),
)
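With input_key="question" set on the memory, save_context looks up inputs["question"] directly, so you can drop the empty "input" and "output_key" entries from the payload. A minimal invocation sketch, assuming the chain above and a query string:

# Memory supplies "chat_history"; only the question needs to be passed in.
response = retrieval_chain.invoke({"question": query})
print(response["answer"])  # ConversationalRetrievalChain returns its result under "answer"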
Checked other resources
Example Code
Error Message and Stack Trace (if applicable)
(venv) PS C:\Users\User\Desktop\Linkenite\MarketingAI MVP> streamlit run apporiginal.py
USER_AGENT environment variable not set, consider setting it to identify your requests.
C:\Users\User\Desktop\Linkenite\MarketingAI MVP\venv\Lib\site-packages\langchain_core\_api\deprecation.py:139: LangChainDeprecationWarning: The class `LLMChain` was deprecated in LangChain 0.1.17 and will be removed in 0.3.0. Use RunnableSequence, e.g., `prompt | llm` instead.
  warn_deprecated(
C:\Users\User\Desktop\Linkenite\MarketingAI MVP\venv\Lib\site-packages\langchain_core\_api\deprecation.py:139: LangChainDeprecationWarning: The class `ConversationalRetrievalChain` was deprecated in LangChain 0.1.17 and will be removed in 0.3.0. Use create_history_aware_retriever together with create_retrieval_chain (see example in docstring) instead.
  warn_deprecated(
2024-06-21 12:54:19.810 Uncaught app exception
Traceback (most recent call last):
  File "C:\Users\User\Desktop\Linkenite\MarketingAI MVP\venv\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 589, in _run_script
    exec(code, module.__dict__)
  File "C:\Users\User\Desktop\Linkenite\MarketingAI MVP\apporiginal.py", line 147, in <module>
    response = get_LLM_response(prompt, context, vector_store)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\Desktop\Linkenite\MarketingAI MVP\apporiginal.py", line 109, in get_LLM_response
    response = retrieval_chain.invoke({"question": query, "context": documents, "input": ""})
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\Desktop\Linkenite\MarketingAI MVP\venv\Lib\site-packages\langchain\chains\base.py", line 166, in invoke
    raise e
  File "C:\Users\User\Desktop\Linkenite\MarketingAI MVP\venv\Lib\site-packages\langchain\chains\base.py", line 161, in invoke
    final_outputs: Dict[str, Any] = self.prep_outputs(
                                    ^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\Desktop\Linkenite\MarketingAI MVP\venv\Lib\site-packages\langchain\chains\base.py", line 460, in prep_outputs
    self.memory.save_context(inputs, outputs)
  File "C:\Users\User\Desktop\Linkenite\MarketingAI MVP\venv\Lib\site-packages\langchain\memory\chat_memory.py", line 55, in save_context
    input_str, output_str = self._get_input_output(inputs, outputs)
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\User\Desktop\Linkenite\MarketingAI MVP\venv\Lib\site-packages\langchain\memory\chat_memory.py", line 51, in _get_input_output
    return inputs[prompt_input_key], outputs[output_key]
KeyError: ''
Description
I am trying to invoke a retrieval chain with three parameters, {"question": query, "context": documents, "input": ""}, and the call throws a KeyError related to the output_key.
When I pass output_key in the payload as well, {"question": query, "context": documents, "input": "", "output_key": ""}, it gives another error.
The error comes from line 51 of langchain/memory/chat_memory.py, in _get_input_output: return inputs[prompt_input_key], outputs[output_key]
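Passing output_key inside the invoke() payload cannot work: _get_input_output reads it as an attribute of the memory object, not from the chain inputs. A minimal sketch of setting it on the memory instead, assuming the chain setup from the answer above:

# Keys are configured on the memory object, not passed to invoke().
memory = ConversationBufferMemory(
    memory_key="chat_history",  # prompt variable the history is injected into
    input_key="question",       # which invoke() key to record as the human turn
    output_key="answer",        # which chain output to record as the AI turn
)

Since both LLMChain and ConversationalRetrievalChain are deprecated per the warnings in the trace, the longer-term fix is the replacement API those warnings name. A rough migration sketch, assuming the llm and retriever from the original code; the prompt wording is illustrative:

from langchain.chains import create_history_aware_retriever, create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

# Rephrase the follow-up question into a standalone question before retrieval.
condense_prompt = ChatPromptTemplate.from_messages([
    MessagesPlaceholder("chat_history"),
    ("human", "{input}"),
    ("human", "Combine the chat history and follow up question into a standalone question."),
])
history_aware_retriever = create_history_aware_retriever(llm, retriever, condense_prompt)

# Answer the question from the retrieved documents.
answer_prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer the question using the following context:\n\n{context}"),
    MessagesPlaceholder("chat_history"),
    ("human", "{input}"),
])
combine_docs_chain = create_stuff_documents_chain(llm, answer_prompt)
rag_chain = create_retrieval_chain(history_aware_retriever, combine_docs_chain)

# The new API takes "input" and returns "answer"; chat history is managed by the caller.
result = rag_chain.invoke({"input": query, "chat_history": []})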