langchain-ai / langchain

πŸ¦œπŸ”— Build context-aware reasoning applications
https://python.langchain.com
MIT License
94.45k stars 15.27k forks

AgentExecutor not working with MultiPromptChain #9416

Closed Alex-Karmazin closed 8 months ago

Alex-Karmazin commented 1 year ago

System Info

Windows 10
Python 3.9.7
langchain 0.0.236

Who can help?

@hwchase17 @agola11 I'm having trouble getting MultiPromptChain and AgentExecutor to work together. The problem is actually trivial: MultiPromptChain.destination_chains has the type Mapping[str, LLMChain], and AgentExecutor does not fit that definition. Second, AgentExecutor has output_keys = ["output"], but MultiPromptChain expects ["text"]. My workaround is a custom class:

class CustomRouterChain(MultiRouteChain):
    @property
    def output_keys(self) -> List[str]:
        return ["output"]

But it would be more user friendly if these two classes could work together out of the box according to your tutorials. Thanks.
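For what it's worth, the output-key mismatch boils down to a renaming problem. Here is a minimal stdlib-only sketch of the adapter idea (plain Python, no LangChain; `adapt_output_keys` and the fake agent are illustrative names, not library APIs):

```python
def adapt_output_keys(chain_fn, src_key="output", dst_key="text"):
    """Wrap a chain-like callable so its result dict exposes dst_key
    instead of src_key, leaving other keys untouched."""
    def wrapped(inputs):
        result = dict(chain_fn(inputs))
        result[dst_key] = result.pop(src_key)
        return result
    return wrapped

# Fake agent standing in for AgentExecutor: returns {"output": ...}
fake_agent = lambda inputs: {"output": f"echo: {inputs['input']}"}

adapted = adapt_output_keys(fake_agent)
print(adapted({"input": "hi"}))  # {'text': 'echo: hi'}
```

The same renaming could in principle be done by any thin wrapper chain sitting between the router and the agent.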


Reproduction

from langchain.chains.router import MultiPromptChain
from langchain.chains import ConversationChain
from langchain.chains.llm import LLMChain
from langchain.prompts import PromptTemplate
from langchain.agents import tool
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import OpenAI

physics_template = """You are a very smart physics professor. \
You are great at answering questions about physics in a concise and easy to understand manner. \
When you don't know the answer to a question you admit that you don't know.

Here is a question: {input}"""

math_template = """You are a very good mathematician. You are great at answering math questions. \
You are so good because you are able to break down hard problems into their component parts, \
answer the component parts, and then put them together to answer the broader question.

Here is a question: {input}"""

@tool
def text_length(input: str) -> int:
    """Returns the exact length of the input text. Use it when you need to measure text length."""
    return len(input)

llm = OpenAI()
agent = initialize_agent(
    [text_length], llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)

prompt_infos = [
    {
        "name": "physics",
        "description": "Good for answering questions about physics",
        "prompt_template": physics_template,
        "agent": None,
    },
    {
        "name": "math",
        "description": "Good for answering math questions",
        "prompt_template": math_template,
        "agent": None,
    },
    {
        "name": "text_length",
        "description": "Good for answering questions about text length",
        "prompt_template": "{input}",
        "agent": agent,
    },
]

destination_chains = {}
for p_info in prompt_infos:
    name = p_info["name"]
    prompt_template = p_info["prompt_template"]
    prompt = PromptTemplate(template=prompt_template, input_variables=["input"])
    chain = LLMChain(llm=llm, prompt=prompt)
    if p_info["agent"] is None:
        destination_chains[name] = chain
    else:
        destination_chains[name] = p_info["agent"]
default_chain = ConversationChain(llm=llm, output_key="text")

from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain.chains.router.multi_prompt_prompt import MULTI_PROMPT_ROUTER_TEMPLATE

destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
destinations_str = "\n".join(destinations)
router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations_str)
router_prompt = PromptTemplate(
    template=router_template,
    input_variables=["input"],
    output_parser=RouterOutputParser(),
)
router_chain = LLMRouterChain.from_llm(llm, router_prompt)

chain = MultiPromptChain(
    router_chain=router_chain,
    destination_chains=destination_chains,
    default_chain=default_chain,
    verbose=True,
)

print(chain.run("What is black body radiation?"))
print(chain.run("What is the first prime number greater than 40 such that one plus the prime number is divisible by 3"))
print(chain.run("What length have following text: one, two, three, four"))

Expected behavior

Runs without errors.

dosubot[bot] commented 1 year ago

πŸ€–

Hello,

Thank you for bringing this to our attention. It seems like you're trying to use an AgentExecutor as a destination chain in a MultiPromptChain, but you're running into issues because AgentExecutor doesn't fit the expected type for destination_chains in MultiPromptChain, and because the output_keys don't match.

The MultiPromptChain class in LangChain is designed to work with instances of LLMChain as destination chains. The destination_chains attribute is a mapping of names to LLMChain instances. An AgentExecutor is not a subclass of LLMChain, which is why you're seeing a type mismatch.

As for the output_keys, the MultiPromptChain class expects the destination chains to produce an output with the key "text". However, the AgentExecutor class produces an output with the key "output". This is why you're seeing a mismatch in the expected output keys.

Your workaround of creating a custom class that overrides the output_keys method to return ["output"] is a valid solution. However, I understand that you'd prefer if MultiPromptChain and AgentExecutor could work together out of the box.

One possible solution could be to create a subclass of AgentExecutor that overrides the output_keys method to return ["text"], and also wraps the AgentExecutor in an LLMChain so that it can be used as a destination chain in a MultiPromptChain. Here's an example of how you could do this:

class CustomAgentExecutor(AgentExecutor):
    @property
    def output_keys(self) -> List[str]:
        return ["text"]

    def run(self, input: Any, **kwargs: Any) -> Dict[str, Any]:
        result = super().run(input, **kwargs)
        return {"text": result["output"]}

class AgentLLMChain(LLMChain):
    def __init__(self, agent_executor: AgentExecutor, **kwargs: Any):
        super().__init__(**kwargs)
        self.agent_executor = agent_executor

    def run(self, input: Any, **kwargs: Any) -> Dict[str, Any]:
        return self.agent_executor.run(input, **kwargs)

# Now you can use an AgentLLMChain as a destination chain in a MultiPromptChain
destination_chains[name] = AgentLLMChain(CustomAgentExecutor(agent))

This is just a workaround, and it might not be the most elegant solution. We'll consider your feedback and look into ways to make MultiPromptChain and AgentExecutor more compatible in future versions of LangChain.

I hope this helps! Let me know if you have any other questions.


darshan2203 commented 1 year ago

System Info

Python 3.11.4
langchain 0.0.279

@Alex-Karmazin did it work for you?

Problem:

I tried this approach, but when I execute CustomAgentExecutor(agent), it throws the error below.

TypeError: Serializable.__init__() takes 1 positional argument but 2 were given
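That TypeError comes from the pydantic base class that LangChain chains inherit from: Serializable is a pydantic model, so its __init__ accepts only keyword (field) arguments, while CustomAgentExecutor(agent) passes the agent positionally. A stdlib-only mimic of the behavior (illustrative classes, not the real LangChain ones):

```python
class Serializable:
    """Mimics a pydantic BaseModel: fields are keyword-only."""
    def __init__(self, **data):
        for key, value in data.items():
            setattr(self, key, value)

class CustomAgentExecutor(Serializable):
    pass

try:
    CustomAgentExecutor("some agent")  # positional, like CustomAgentExecutor(agent)
except TypeError as exc:
    print(exc)  # e.g. "__init__() takes 1 positional argument but 2 were given"

executor = CustomAgentExecutor(agent="some agent")  # keyword form is accepted
print(executor.agent)
```

So any subclass of a pydantic-based chain has to receive its attributes as keyword arguments (and they must be declared fields), which is why the positional call in the suggested workaround fails.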

mvkvc commented 1 year ago

I also hit this error while trying to route between different agents. Is this a supported workflow, or is there a better approach?
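In the meantime, routing between agents can be prototyped as a plain dispatch table, which is essentially what the router chain does. A stdlib-only sketch (the names, lambdas, and keyword-based router are illustrative stand-ins, not LangChain APIs):

```python
def route(question, pick_destination, destinations, default):
    """Pick a destination chain by name and call it; fall back to default."""
    name = pick_destination(question)
    handler = destinations.get(name, default)
    return handler(question)

destinations = {
    "math": lambda q: {"text": "math: " + q},
    "physics": lambda q: {"text": "physics: " + q},
}
# Stand-in for LLMRouterChain: keyword matching instead of an LLM call.
pick = lambda q: "math" if "prime" in q else "physics"
default = lambda q: {"text": "default: " + q}

print(route("first prime greater than 40?", pick, destinations, default))
# {'text': 'math: first prime greater than 40?'}
```

The destination callables just need to agree on one output key, which sidesteps the "output" vs. "text" mismatch entirely.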

davidmarcemps commented 1 year ago

@dosu-bot, @baskaryan

Same problem here. I'm getting the same error: TypeError: Serializable.__init__() takes 1 positional argument but 2 were given

Any help, please?

@darshan2203, @mvkvc, were you able to fix it?

Thanks in advance

dosubot[bot] commented 9 months ago

Hi, @Alex-Karmazin

I'm helping the LangChain team manage their backlog and am marking this issue as stale. The issue you raised involves incompatibility between MultiPromptChain and AgentExecutor in the langchain library, with a suggested workaround using a custom class. Other users also reported similar issues and sought assistance, but the problem remains unresolved.

Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.

Thank you for your understanding and cooperation.