Closed xasxin closed 4 months ago
🤖
Hi @xasxin,
Good to see you again! I'm diving into the issue you've raised about the ConversationChain in LangChain. I'll get back to you with a detailed response shortly.
🤖
Based on the information you've provided, it seems like the error you're encountering is due to a mismatch between the input variables defined in the prompt and the ones provided when formatting the prompt.
In the LangChain framework, when you create a PromptTemplate, you need to specify the input_variables that will be used in the template. If you provide more or fewer variables when formatting the prompt, or if the variable names do not match, you will get a ValueError.
Here is an example of how to do this correctly:
from langchain.prompts.prompt import PromptTemplate
# Define the template and input variables
template = "This is a {foo} test."
input_variables = ["foo"]
# Create the PromptTemplate
prompt = PromptTemplate(input_variables=input_variables, template=template)
# Format the prompt with the correct input variables
result = prompt.format(foo="bar")
# The result should be "This is a bar test."
print(result)
In this example, the input_variables ["foo"] match the variable used in the template and the variable provided when formatting the prompt. If you were to provide a different variable when formatting the prompt, such as prompt.format(bar="bar"), you would get an error, because "bar" is not one of the input_variables defined in the PromptTemplate.
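The variable-matching described above can be sketched without LangChain at all. The helper below is an illustration of roughly what such validation amounts to (the names check_inputs and template_variables are made up for this sketch, not LangChain APIs): extract the placeholder names from the template, and refuse to format unless the supplied keyword arguments match them exactly.

```python
from string import Formatter

def template_variables(template: str) -> set[str]:
    """Extract the placeholder names from an f-string-style template."""
    return {name for _, name, _, _ in Formatter().parse(template) if name}

def check_inputs(template: str, **kwargs) -> str:
    """Format the template, but first verify that the supplied
    variables exactly match the template's placeholders."""
    expected = template_variables(template)
    provided = set(kwargs)
    if expected != provided:
        raise ValueError(f"Expected {sorted(expected)}, got {sorted(provided)}")
    return template.format(**kwargs)

print(check_inputs("This is a {foo} test.", foo="bar"))  # This is a bar test.

try:
    check_inputs("This is a {foo} test.", bar="bar")
except ValueError as e:
    print(e)  # Expected ['foo'], got ['bar']
```

Running this shows the same failure mode: supplying bar where the template expects foo is rejected before formatting ever happens.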
You can find more information about this in the LangChain's test cases for prompts here.
As for why this error does not occur with LLMChain: LLMChain does not perform the same validation as the validate_prompt_input_variables method in the ConversationChain class. LLMChain is a more general class, while ConversationChain is specifically designed for conversations and therefore has additional requirements and checks. You can find more information about this in the LangChain's source code for ConversationChain and LLMChain.
I hope this helps! If you have any other questions, feel free to ask.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
I'm also running into this and it's not the template. Code:
import os
from langchain.chains import ConversationChain
from langchain.prompts.prompt import PromptTemplate
from langchain.memory import ConversationSummaryMemory
from langchain_openai import OpenAI
CHARACTER_TEMPLATE = '''You are a character in a story. Your name is {character_name}. The input you receive is from a human Player.
Previous conversation history: {history}
New input: {input}
Response:'''
llm = OpenAI(temperature=1, openai_api_key=os.environ.get('OPENAPI_API_KEY'), max_tokens=1024)
character_prompt = PromptTemplate(input_variables=['history', 'input', 'character_name'], template=CHARACTER_TEMPLATE)
conversation = ConversationChain(llm=llm, prompt=character_prompt, memory=ConversationSummaryMemory(llm=llm))
Message: Got unexpected prompt input variables. The prompt expects ['character_name', 'history', 'input'], but got ['history'] as inputs from memory, and input as the normal input key. (type=value_error)
I have the same issue. Any solution?
Issue you'd like to raise.
Hi, I'm trying to make a chatbot using ConversationChain. The prompt takes three variables: "history" from memory, the user input "input", and another variable, say variable3. But I get the error "Got unexpected prompt input variables. The prompt expects ['history', 'input', 'variable3'], but got ['history'] as inputs from memory, and input as the normal input key."
This error doesn't occur if I use LLMChain. So how can I prevent it if I want to use ConversationChain?
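One common workaround (an assumption worth verifying against your LangChain version, where PromptTemplate exposes a partial() method) is to pre-fill the extra variable so the prompt handed to ConversationChain only exposes {history} and {input}. The sketch below illustrates the idea in plain Python, counting the remaining placeholders before and after pre-filling; the variables() helper is made up for this sketch:

```python
from string import Formatter

def variables(template: str) -> set[str]:
    """Placeholder names still unfilled in the template."""
    return {name for _, name, _, _ in Formatter().parse(template) if name}

TEMPLATE = (
    "You are a character in a story. Your name is {character_name}.\n"
    "Previous conversation history: {history}\n"
    "New input: {input}\n"
    "Response:"
)

# Before pre-filling, the prompt needs three variables -- which is
# exactly what ConversationChain rejects.
print(sorted(variables(TEMPLATE)))  # ['character_name', 'history', 'input']

# Pre-fill the extra variable (in LangChain, this would be something
# like character_prompt.partial(character_name="Alice")).
partial_template = TEMPLATE.replace("{character_name}", "Alice")
print(sorted(variables(partial_template)))  # ['history', 'input']
```

After the pre-fill, only the two keys the chain can actually supply remain: {history} from memory and {input} from the user, so the validation should pass.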
Thanks