[Open] Stadly opened this issue 1 month ago
Hey there, @Stadly! I'm here to assist you with any issues, questions, or contributions you have for the LangChain.js repository. Let's work together to solve this bug!

To resolve the issue with `RunnableWithMessageHistory` when using `RunnablePassthrough.assign`, you need to ensure that `RunnablePassthrough.assign` is properly configured to handle both the message history and the response. The error you're encountering suggests that the input structure is not as expected. Here's a revised version of your code that should work correctly:

1. **Ensure `RunnablePassthrough.assign` is correctly configured:** the `assign` method should correctly map the input and output keys.
2. **Check the `_enterHistory` and `_exitHistory` methods.**

Here is the updated code:
```ts
import {
  ChatPromptTemplate,
  HumanMessagePromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import {
  RunnablePassthrough,
  RunnableWithMessageHistory,
} from "@langchain/core/runnables";
import { FakeListChatMessageHistory } from "@langchain/core/utils/testing";
import { ChatOpenAI } from "@langchain/openai";

const prompt = ChatPromptTemplate.fromMessages([
  new MessagesPlaceholder("history"),
  HumanMessagePromptTemplate.fromTemplate("{question}"),
]);

const model = new ChatOpenAI({ modelName: "gpt-4" });

// The idea is that the response is assigned to the input of the runnable.
// The result should be that the output contains both `history` and `response`.
const runnable = RunnablePassthrough.assign({ response: prompt.pipe(model) });

const messageHistory = new FakeListChatMessageHistory();

const chain = new RunnableWithMessageHistory({
  runnable,
  getMessageHistory: () => messageHistory,
  inputMessagesKey: "question",
  outputMessagesKey: "response",
  historyMessagesKey: "history",
});

const response = await chain.invoke(
  { question: "foo" },
  { configurable: { sessionId: "1" } }
);
console.log(response);
```
1. **`RunnablePassthrough.assign`:** the `assign` method correctly maps `response` to the output of `prompt.pipe(model)`.
2. **`RunnableWithMessageHistory` configuration:** `inputMessagesKey`, `outputMessagesKey`, and `historyMessagesKey` are correctly set to handle the message history and response.
3. **Error handling:** ensure the input to `RunnableWithMessageHistory` is correctly formatted.

By ensuring these configurations, you should be able to correctly return both the message history and the response when using `RunnablePassthrough.assign` with `RunnableWithMessageHistory` [1][2].
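As a minimal, dependency-free sketch of the merging behavior the `assign` step is expected to provide (an illustration only, not LangChain's actual implementation): the input is passed through unchanged and the newly computed keys are merged in, which is why the output should contain both `history` and `response`.

```typescript
// Illustrative stand-in for RunnablePassthrough.assign (not the real API):
// merge the original input object with newly computed keys.
type Input = Record<string, unknown>;

function assign(
  computed: Record<string, (input: Input) => unknown>
): (input: Input) => Input {
  return (input) => {
    const extra: Input = {};
    for (const [key, fn] of Object.entries(computed)) {
      extra[key] = fn(input);
    }
    // Original keys (e.g. `history`) survive alongside the new `response`.
    return { ...input, ...extra };
  };
}

const run = assign({ response: (input) => `echo: ${input.question}` });
console.log(JSON.stringify(run({ question: "foo", history: [] })));
// → {"question":"foo","history":[],"response":"echo: foo"}
```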
To continue talking to Dosu, mention @dosu.
When looking at the trace for this example, I noticed the addition of an `input` key there as well. Maybe this indicates the same underlying issue, even though the example works? When looking at `insertHistory` in the trace, I would have expected the input to be `{ "input": "What's my name?" }`, but it is logged as `{ "input": { "input": "What's my name?" } }`.
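A hedged guess at how such a shape could arise (a hypothetical reproduction, not the actual `history.ts` code path): if the history-handling step unconditionally wraps whatever it receives under an `input` key, then a value that is already of the form `{ input: ... }` gets nested one level too deep.

```typescript
// Hypothetical wrapper illustrating the double-nesting; the real LangChain
// code may differ in how it keys inputs for the history store.
function wrapForHistory(value: unknown): { input: unknown } {
  return { input: value };
}

const alreadyKeyed = { input: "What's my name?" };
console.log(JSON.stringify(wrapForHistory(alreadyKeyed)));
// → {"input":{"input":"What's my name?"}}
```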
### Checked other resources

### Example Code

### Error Message and Stack Trace (if applicable)
The issue arises in https://github.com/langchain-ai/langchainjs/blob/4d65a34ecb47ca24952dc36c725a68b31cfc2e2b/langchain-core/src/runnables/history.ts#L264

`inputs` should have been `{ question: '0', history: [] }`, but it is `{ input: { question: '0', history: [] } }`.

It works correctly if I remove `RunnablePassthrough.assign` and just do `const runnable = prompt.pipe(model);`, but then I can't get hold of the message history.

Stack trace:
### Description

I'm trying to get `RunnableWithMessageHistory` to return both the message history and the response. The idea is to use `RunnablePassthrough.assign` to retain the history passed to the runnable. I would expect the output when invoking the chain to contain both `history` and `response`. What actually happens is that an error is output, and neither the `question` nor the `response` is added to the message history.

### System Info
linux @langchain/core@0.2.6 @langchain/openai@0.1.2 node@v20.14.0