langchain-ai / langgraph

Build resilient language agents as graphs.
https://langchain-ai.github.io/langgraph/
MIT License

No {replan} variable in planner_prompt in the LLM Compiler example and the wfh/llm-compiler LangSmith hub prompt template? #1513

Closed maciejNisztuk closed 2 months ago

maciejNisztuk commented 2 months ago

Checked other resources

Example Code

from typing import Sequence

from langchain_core.language_models import BaseChatModel
from langchain_core.messages import FunctionMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableBranch
from langchain_core.tools import BaseTool

# LLMCompilerPlanParser is defined earlier in the LLM Compiler example.

prompt = ChatPromptTemplate.from_messages([
    (
        "system",
        """Given a user query, create a plan to solve it with the utmost parallelizability. Each plan should comprise an action from the following {num_tools} types:
{tool_descriptions}
{num_tools}. join(): Collects and combines results from prior actions.

 - An LLM agent is called upon invoking join() to either finalize the user query or wait until the plans are executed.
 - join should always be the last action in the plan, and will be called in two scenarios:
   (a) if the answer can be determined by gathering the outputs from tasks to generate the final response.
   (b) if the answer cannot be determined in the planning phase before you execute the plans.
Guidelines:
 - Each action described above contains input/output types and description.
    - You must strictly adhere to the input and output types for each action.
    - The action descriptions contain the guidelines. You MUST strictly follow those guidelines when you use the actions.
 - Each action in the plan should strictly be one of the above types. Follow the Python conventions for each action.
 - Each action MUST have a unique ID, which is strictly increasing.
 - Inputs for actions can either be constants or outputs from preceding actions. In the latter case, use the format $id to denote the ID of the previous action whose output will be the input.
 - Always call join as the last action in the plan. Say '<END_OF_PLAN>' after you call join.
 - Ensure the plan maximizes parallelizability.
 - Only use the provided action types. If a query cannot be addressed using these, invoke the join action for the next steps.
 - Never introduce new actions other than the ones provided.""",
        # NOTE: there is no {replan} placeholder anywhere in this template,
        # which is what triggers the error below.
    ),
    ("placeholder", "{messages}"),
    (
        "system",
        """Remember, ONLY respond with the task list in the correct format! E.g.:
idx. tool(arg_name=args)""",
    ),
])

def create_planner(
    llm: BaseChatModel, tools: Sequence[BaseTool], base_prompt: ChatPromptTemplate
):
    tool_descriptions = "\n".join(
        f"{i+1}. {tool.description}\n"  # +1 offsets the zero-based index so numbering starts at 1
        for i, tool in enumerate(tools)
    )
    planner_prompt = base_prompt.partial(
        replan="",
        num_tools=len(tools)
        + 1,  # Add one because we're adding the join() tool at the end.
        tool_descriptions=tool_descriptions,
    )
    replanner_prompt = base_prompt.partial(
        replan=' - You are given "Previous Plan" which is the plan that the previous agent created along with the execution results '
        "(given as Observation) of each plan and a general thought (given as Thought) about the executed results."
        'You MUST use these information to create the next plan under "Current Plan".\n'
        ' - When starting the Current Plan, you should start with "Thought" that outlines the strategy for the next plan.\n'
        " - In the Current Plan, you should NEVER repeat the actions that are already executed in the Previous Plan.\n"
        " - You must continue the task index from the end of the previous one. Do not repeat task indices.",
        num_tools=len(tools) + 1,
        tool_descriptions=tool_descriptions,
    )

    def should_replan(state: list):
        # Context is passed as a system message
        return isinstance(state[-1], SystemMessage)

    def wrap_messages(state: list):
        return {"messages": state}

    def wrap_and_get_last_index(state: list):
        # Resume task numbering after the highest index used in the previous plan.
        next_task = 0
        for message in state[::-1]:
            if isinstance(message, FunctionMessage):
                next_task = message.additional_kwargs["idx"] + 1
                break
        state[-1].content = state[-1].content + f" - Begin counting at: {next_task}"
        return {"messages": state}

    return (
        RunnableBranch(
            (should_replan, wrap_and_get_last_index | replanner_prompt),
            wrap_messages | planner_prompt,
        )
        | llm
        | LLMCompilerPlanParser(tools=tools)
    )

Error Message and Stack Trace (if applicable)

Error: Input variables `replan` are not used in any of the prompt messages.

Description

I'm trying to use the LLM Compiler example code, but I get an error saying the 'replan' variable is not in the planner_prompt. I checked the wfh/llm-compiler template on the LangSmith hub as well, and I don't see the variable there either.
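One quick way to confirm which variables the hub template actually exposes (this assumes the standard langchain hub client is installed):

from langchain import hub

base_prompt = hub.pull("wfh/llm-compiler")
# Lists the template variables the prompt expects; 'replan' is missing here.
print(base_prompt.input_variables)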

System Info

Windows

isahers1 commented 2 months ago

Thank you for raising this issue - this PR https://github.com/langchain-ai/langgraph/pull/1517 should contain the fix.
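For anyone hitting this before updating: the shape of the fix is to give the system message a {replan} slot that partial can legitimately fill. A minimal sketch with a toy template (the exact wording merged in the PR may differ):

from langchain_core.prompts import ChatPromptTemplate

# Toy template for illustration only; the real prompt carries the full
# guidelines shown above. The trailing {replan} slot is the missing piece.
base_prompt = ChatPromptTemplate.from_messages([
    (
        "system",
        "Given a user query, create a plan to solve it with the utmost "
        "parallelizability. Each plan should comprise an action from the "
        "following {num_tools} types:\n{tool_descriptions}\n{replan}",
    ),
    ("placeholder", "{messages}"),
])

# The planner blanks the slot; the replanner fills it with the extra rules.
planner_prompt = base_prompt.partial(
    replan="", num_tools=2, tool_descriptions="1. search(query)"
)
replanner_prompt = base_prompt.partial(
    replan=' - You are given "Previous Plan" and its Observations; continue under "Current Plan".',
    num_tools=2,
    tool_descriptions="1. search(query)",
)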

maciejNisztuk commented 2 months ago

@isahers1 Thank you for the fast reply and fix. By the way, is there a plan to translate the LLM Compiler example to the JavaScript/TypeScript version of LangGraph?

vbarda commented 2 months ago

Yes, we plan to translate all of these!

McCReuben commented 2 months ago

I appreciate the quick fix on this issue; I was also struggling with it. I have a closely related follow-up question. In the revised code, a replan input variable is still being passed to the base_prompt (see here), but the base_prompt doesn't have an input variable called replan. Is the replan functionality not included in this version? Also, a side question: why doesn't base_prompt.partial error when it is passed a variable it shouldn't recognize?

EDIT: On further inspection, I believe this issue still exists in the current update, but none of the current examples in LLMCompiler.ipynb use the replan functionality, so the bug isn't being reproduced. I have opened a PR to add an example and fix the issue.
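To partially answer my own side question: as far as I can tell, partial just records the kwargs in partial_variables without validating them against the template, and variables a message doesn't use are dropped at format time, so nothing ever errors. A quick sketch of the behavior I observed (recent langchain-core; may differ across versions):

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([("system", "Hello {name}!")])

# No error here, even though the template has no {unused} placeholder...
partial_prompt = prompt.partial(unused="anything")

# ...and the stray partial variable is silently ignored when formatting.
print(partial_prompt.invoke({"name": "world"}))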

maciejNisztuk commented 2 months ago

@vbarda thanks for the response. Do you have an ETA for this feature? I must admit it would be incredibly helpful for my use case :)