langchain-ai / langchainjs

🦜🔗 Build context-aware reasoning applications 🦜🔗
https://js.langchain.com/docs/
MIT License

llm memory: human message appended too late (at end of run) #7255

Closed: Morriz closed this issue 4 days ago

Morriz commented 5 days ago


Example Code

It all revolves around this function in chat_memory:

    async saveContext(inputValues, outputValues) {
        // this is purposefully done in sequence so they're saved in order
        await this.chatHistory.addUserMessage((0, memory_1.getInputValue)(inputValues, this.inputKey));
        await this.chatHistory.addAIChatMessage((0, memory_1.getOutputValue)(outputValues, this.outputKey));
    }

which clearly shows that, BY DESIGN, langchain inserts BOTH messages at the same time. This nukes any attempt by intermediate steps to insert other messages AFTER the human's input and BEFORE the llm's final response.
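A minimal sketch (mock classes, not the real LangChain ones) of why this ordering goes wrong: the tool pushes its message during the run, but `saveContext` only appends the human and AI messages after the run completes.

```typescript
// Mock message and history types, for illustration only.
type Message = { role: "human" | "ai" | "tool"; content: string };

class MockChatHistory {
  messages: Message[] = [];
  async addUserMessage(content: string) {
    this.messages.push({ role: "human", content });
  }
  async addAIChatMessage(content: string) {
    this.messages.push({ role: "ai", content });
  }
  async addToolMessage(content: string) {
    this.messages.push({ role: "tool", content });
  }
}

async function main() {
  const history = new MockChatHistory();

  // During the agent run, a tool writes into memory first...
  await history.addToolMessage("tool result");

  // ...then saveContext appends BOTH remaining messages at the end of the run:
  await history.addUserMessage("user question");
  await history.addAIChatMessage("final answer");

  // Resulting order: tool, human, ai -- the human message arrives too late.
  console.log(history.messages.map((m) => m.role).join(","));
}

main();
```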

Error Message and Stack Trace (if applicable)

No response

Description

I'm calling a tools agent and in the tools I insert a tool message into the memory, which erroneously ends up in this order in the memory:

1. tool message
2. human message
3. AI message

when it clearly should end up like this:

1. human message
2. tool message
3. AI message

System Info

master (24 nov 2024)

dosubot[bot] commented 5 days ago

I found some relevant information that might help address the issue you're experiencing with the order of messages in chat_memory.

  1. Intermediate Tool Messages Handling: To ensure that intermediate tool messages appear in the correct order in the chat history, LangChain.js recommends aggregating multiple tool messages into a single tool message before adding them to the chat history. This involves checking if consecutive messages are of type "tool" and concatenating their results. Once a non-tool message is encountered, the aggregated tool results are converted into a single message and added to the chat history [1][2].

  2. Order of Message Insertion: The saveContext function in the chat_memory module is designed to save user input and AI output in sequence to maintain the chronological order of interactions. However, if intermediate tool messages need to be inserted between the user input and AI response, you might need to adjust the logic to handle these insertions explicitly [3].

  3. Existing Tests for Message Order: There are tests in LangChain.js that verify the order of messages when intermediate tool messages are involved. These tests ensure that the sequence and content of messages, including those from tools, are correctly handled and formatted [4][5].

To address your specific issue, you might consider modifying the saveContext function to explicitly handle the insertion of tool messages between the user input and AI response. This could involve restructuring how messages are added to the chat history to accommodate intermediate steps.
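The aggregation idea from point 1 could be sketched like this (the function name and message shape are illustrative, not the actual LangChain.js API): consecutive "tool" messages are buffered and merged into a single tool message once a non-tool message is encountered.

```typescript
// Illustrative message shape; not the real LangChain.js type.
type Msg = { role: string; content: string };

function aggregateToolMessages(messages: Msg[]): Msg[] {
  const result: Msg[] = [];
  let toolBuffer: string[] = [];

  const flush = () => {
    if (toolBuffer.length > 0) {
      // Collapse the buffered tool results into one tool message.
      result.push({ role: "tool", content: toolBuffer.join("\n") });
      toolBuffer = [];
    }
  };

  for (const m of messages) {
    if (m.role === "tool") {
      toolBuffer.push(m.content); // buffer consecutive tool results
    } else {
      flush(); // a non-tool message ends the run of tool messages
      result.push(m);
    }
  }
  flush();
  return result;
}
```

For example, `[tool, tool, ai]` collapses to `[tool, ai]`, with the two tool results concatenated.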

To continue talking to Dosu, mention @dosu.

Morriz commented 5 days ago

The bot assumes that we have control over the code, but some of us don't, as we might be using third-party tools that integrate langchain. IMO this is a bug and should be fixed by:

    async saveContext(inputValues, outputValues) {
        // this is purposefully done in sequence so they're saved in order
        if (inputValues) await this.chatHistory.addUserMessage((0, memory_1.getInputValue)(inputValues, this.inputKey));
        if (outputValues) await this.chatHistory.addAIChatMessage((0, memory_1.getOutputValue)(outputValues, this.outputKey));
    }

and just updating part [L107-128] of the code in base.js to this:

    let outputValues: RunOutput;
    if (!(this.memory == null)) {
      // save the human input BEFORE the run, so tool messages can land after it
      await this.memory.saveContext(
        this._selectMemoryInputs(input)
      );
    }
    try {
      outputValues = await (fullValues.signal
        ? (Promise.race([
            this._call(fullValues as RunInput, runManager, config),
            new Promise((_, reject) => {
              fullValues.signal?.addEventListener("abort", () => {
                reject(new Error("AbortError"));
              });
            }),
          ]) as Promise<RunOutput>)
        : this._call(fullValues as RunInput, runManager, config));
    } catch (e) {
      await runManager?.handleChainError(e);
      throw e;
    }
    if (!(this.memory == null)) {
      // save only the AI output AFTER the run
      await this.memory.saveContext(
        undefined,
        outputValues
      );
    }

Of course any dependent integrations would have to be updated similarly...
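To show the effect of the proposed change, here is a self-contained sketch (mock classes, not LangChain itself): `saveContext` is called once with only the input values BEFORE the run and once with only the output values AFTER it, so tool messages land in between.

```typescript
type ChatMessage = { role: "human" | "ai" | "tool"; content: string };

class History {
  messages: ChatMessage[] = [];
  async addUserMessage(c: string) { this.messages.push({ role: "human", content: c }); }
  async addAIChatMessage(c: string) { this.messages.push({ role: "ai", content: c }); }
  async addToolMessage(c: string) { this.messages.push({ role: "tool", content: c }); }
}

class Memory {
  constructor(public chatHistory: History) {}
  // The guarded version proposed above: each half is skipped when undefined.
  async saveContext(inputValues?: string, outputValues?: string) {
    if (inputValues) await this.chatHistory.addUserMessage(inputValues);
    if (outputValues) await this.chatHistory.addAIChatMessage(outputValues);
  }
}

async function run() {
  const history = new History();
  const memory = new Memory(history);

  await memory.saveContext("user question");           // before the run
  await history.addToolMessage("tool result");         // during the run
  await memory.saveContext(undefined, "final answer"); // after the run

  // Order is now human, tool, ai as expected.
  console.log(history.messages.map((m) => m.role).join(","));
}

run();
```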

jacoblee93 commented 4 days ago

Hi there! We generally suggest not using these classes, due to numerous issues like this and differences of opinion on the implementation. See the following guide:

https://js.langchain.com/docs/versions/migrating_memory/