Closed rupalikaushik21 closed 2 months ago
🤖
Hey @rupalikaushik21! Great to see you diving into another challenge with LangChain. How's the coding world treating you today?
Based on the information you've provided, it seems there may be an issue with overlapping keys between the memory and the input. In the validate_prompt_input_variables method of the ConversationChain class, a ValueError is raised if the input key is found in the memory keys, or if the expected keys (memory keys + input key) do not match the prompt variables.
Here's the relevant code from the LangChain repository:
@root_validator()
def validate_prompt_input_variables(cls, values: Dict) -> Dict:
    """Validate that prompt input variables are consistent."""
    memory_keys = values["memory"].memory_variables
    input_key = values["input_key"]
    if input_key in memory_keys:
        raise ValueError(
            f"The input key {input_key} was also found in the memory keys "
            f"({memory_keys}) - please provide keys that don't overlap."
        )
    prompt_variables = values["prompt"].input_variables
    expected_keys = memory_keys + [input_key]
    if set(expected_keys) != set(prompt_variables):
        raise ValueError(
            "Got unexpected prompt input variables. The prompt expects "
            f"{prompt_variables}, but got {memory_keys} as inputs from "
            f"memory, and {input_key} as the normal input key."
        )
    return values
This code is located in the libs/langchain/langchain/chains/conversation/base.py file.
Please ensure that your input key is not present in the memory keys and that the expected keys match the prompt variables. If these conditions are not met, the function call will not work as expected.
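To make the rule concrete, here is a minimal plain-Python sketch of the consistency check. The values shown are the defaults for a ConversationChain using ConversationBufferMemory; this is an illustration of the logic, not the library code itself:

```python
# Sketch of the consistency rule enforced by validate_prompt_input_variables.
# With ConversationBufferMemory and ConversationChain defaults, memory exposes
# "history" and the chain's input key is "input", so the prompt template must
# declare exactly those two variables.
memory_keys = ["history"]                 # memory.memory_variables
input_key = "input"                       # ConversationChain's default input key
prompt_variables = ["history", "input"]   # variables declared by the prompt

# Rule 1: the input key must not also be a memory key.
assert input_key not in memory_keys

# Rule 2: memory keys + input key must exactly match the prompt variables.
expected_keys = memory_keys + [input_key]
assert set(expected_keys) == set(prompt_variables)

print("prompt variables are consistent")
```

If either assertion fails for your setup, that mirrors the ValueError the chain would raise.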
Regarding the role of the ConversationChain class in the LangChain framework, I wasn't able to find specific information within the repository. It might be helpful to review the documentation or source code for more details.
If you continue to experience issues, please provide more information about your implementation, and I'll do my best to assist you further.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
I am still facing issues.
Below is the complete code I am using to implement OpenAI function calling with chains:
from langchain.chains import LLMChain
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory            # added: was missing
from langchain_community.chat_models import ChatOpenAI           # added: was missing
from langchain_core.messages import SystemMessage, HumanMessage  # added: was missing
from langchain_core.utils.function_calling import convert_to_openai_tool
import json

llm = ChatOpenAI(temperature=0.0, model="gpt-3.5-turbo-0613", openai_api_key="mykey")

def misdoFormula(a: int, b: int) -> float:
    """Misdo formula two integers together.

    Args:
        a: First integer
        b: Second integer
    """
    return (a * b) / 3

print(json.dumps(convert_to_openai_tool(misdoFormula)))

memory = ConversationBufferMemory()
conversation = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=True,
)

from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType

message = conversation.invoke(
    [
        SystemMessage(content="You are an AI model. Don't assume information when not provided. Leverage tools passed as input. Ask required input one at a time."),
        HumanMessage(content="can you help me calculate misdo formula for 10 and 3"),
    ],
    tools=[convert_to_openai_tool(misdoFormula)],
    agent=AgentType.OPENAI_FUNCTIONS,
    memory=memory,
)
message
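For what it's worth, even when the model does return a function call, something still has to execute it locally. Here is a self-contained sketch of that dispatch step; the response dict below is hand-written to mirror the OpenAI function_call shape, not produced by a real API call:

```python
import json

def misdoFormula(a: int, b: int) -> float:
    """Misdo formula: (a * b) / 3."""
    return (a * b) / 3

# Hand-written stand-in for a model response that requests a function call
# (mirrors the OpenAI chat-completions "function_call" shape; values are
# made up for illustration).
function_call = {
    "name": "misdoFormula",
    "arguments": json.dumps({"a": 10, "b": 3}),
}

# Dispatch: look up the requested function and call it with the parsed args.
available_functions = {"misdoFormula": misdoFormula}
fn = available_functions[function_call["name"]]
result = fn(**json.loads(function_call["arguments"]))
print(result)  # (10 * 3) / 3 = 10.0
```

The result would then be sent back to the model as a function/tool message so it can compose the final answer.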
Were you able to solve this issue?
Checked other resources
Example Code
The following code:

from langchain.chains import LLMChain
from langchain.chains import ConversationChain

my_functions = [
    {
        'name': 'raise_ticket',
        'description': 'Get the details for a ticket',
        'parameters': {
            'type': 'object',
            'properties': {
                'projectName': {
                    'type': 'string',
                    'description': "Project Name : (e.g. 'ABC', 'XYZ')"
                },
                'issueType': {
                    'type': 'string',
                    'description': "Issue Type : (e.g. 'Change Request', 'Service Request')"
                },
            },
            ...
            ...
            "required": ["projectName", "issueType"]
        }
    }
]

llm = ChatOpenAI(temperature=0.0, model="gpt-3.5-turbo-0613")
memory = ConversationBufferMemory()
conversation = ConversationChain(llm=llm, memory=memory, verbose=False)
message = conversation.invoke(
    [HumanMessage(content='What are the choices of the Issue Type')],
    functions=my_functions,
    memory=memory,
)
Error Message and Stack Trace (if applicable)
Not an error, but the function call is not happening. I always get a generic response from the model.
Description
I was expecting the LangChain library to recognize the function, but it is not recognizing it.
The same function definition works when calling the OpenAI API directly.
System Info
langchain==0.1.4
langchain-community==0.0.16
langchain-core==0.1.17