langchain-ai / langchainjs

🦜🔗 Build context-aware reasoning applications 🦜🔗
https://js.langchain.com/docs/
MIT License

Field "chat_history" in prompt uses a MessagesPlaceholder, which expects an array of BaseMessages as an input value. #2964

Closed: mdesousa closed this issue 10 months ago

mdesousa commented 10 months ago

The code below throws an unexpected error:

  const json = [
    {
      type: "human",
      data: { content: "hi my name is Mario", additional_kwargs: {} },
    },
    {
      type: "ai",
      data: {
        content: "Hello, Mario! How can I assist you today?",
        additional_kwargs: {},
      },
    },
  ] as StoredMessage[];
  const messages = json.map((x) => mapStoredMessageToChatMessage(x));
  const ok = messages.every((x) => isBaseMessage(x));
  console.log("ok", ok); // prints true
  const memory = new BufferMemory({
    chatHistory: new ChatMessageHistory(messages),
    memoryKey: "chat_history",
  });
  const executor = await initializeAgentExecutorWithOptions(tools, model, {
    agentType: "chat-conversational-react-description",
    memory,
  });
  await executor.call({ input: "what is my name?" });

It looks like this may be related to #2871; see the stack trace below.

Error [InputFormatError]: Error: Field "chat_history" in prompt uses a MessagesPlaceholder, which expects an array of BaseMessages as an input value. Received: Human: hi my name is Mario
AI: Hello, Mario! How can I assist you today?
    at MessagesPlaceholder.validateInputOrThrow (langchain/node_modules/langchain/dist/prompts/chat.cjs:120:27)
    at MessagesPlaceholder.formatMessages (langchain/node_modules/langchain/dist/prompts/chat.cjs:127:14)
    at ChatPromptTemplate.formatMessages (langchain/node_modules/langchain/dist/prompts/chat.cjs:356:53)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async ChatPromptTemplate.formatPromptValue (langchain/node_modules/langchain/dist/prompts/chat.cjs:171:32)
    at async LLMChain._call (langchain/node_modules/langchain/dist/chains/llm_chain.cjs:123:29)
    at async LLMChain.call (langchain/node_modules/langchain/dist/chains/base.cjs:104:28)
    at async LLMChain.predict (langchain/node_modules/langchain/dist/chains/llm_chain.cjs:142:24)
    at async ChatConversationalAgent._plan (langchain/node_modules/langchain/dist/agents/agent.cjs:234:24)
    at async AgentExecutor._call (langchain/node_modules/langchain/dist/agents/executor.cjs:144:26)
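
For context, a minimal sketch of the check that raises this InputFormatError (assuming langchain ~0.0.169, where ChatPromptTemplate.fromMessages and the langchain/prompts and langchain/schema entrypoints are available):

import { ChatPromptTemplate, MessagesPlaceholder } from "langchain/prompts";
import { AIMessage, HumanMessage } from "langchain/schema";

const prompt = ChatPromptTemplate.fromMessages([
  new MessagesPlaceholder("chat_history"),
]);

// OK: the placeholder receives an array of BaseMessage instances
await prompt.formatMessages({
  chat_history: [
    new HumanMessage("hi my name is Mario"),
    new AIMessage("Hello, Mario! How can I assist you today?"),
  ],
});

// Throws InputFormatError: the history arrives as one flattened string,
// which is what BufferMemory produces when returnMessages is left false
await prompt.formatMessages({
  chat_history: "Human: hi my name is Mario\nAI: Hello, Mario! How can I assist you today?",
});
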
dosubot[bot] commented 10 months ago

🤖

Hello,

Based on the error message and the context you've provided, it seems like the issue is related to the format of the chat_history field. In the LangChainJS framework, the chat_history field is expected to be an array of BaseMessage instances. However, in your code, it appears that you're passing an array of objects that are not instances of BaseMessage.

Here's how you can modify your code to use BaseMessage instances:

import { BaseMessage, HumanMessage, AIMessage } from 'langchain/schema';

// ...

const messages = [
  new HumanMessage({ content: "hi my name is Mario", additional_kwargs: {} }),
  new AIMessage({ content: "Hello, Mario! How can I assist you today?", additional_kwargs: {} }),
];
// These are already BaseMessage instances, so no mapStoredMessageToChatMessage call is needed
const ok = messages.every((x) => x instanceof BaseMessage);
console.log("ok", ok); // prints true
const memory = new BufferMemory({
  chatHistory: new ChatMessageHistory(messages),
  memoryKey: "chat_history",
});
const executor = await initializeAgentExecutorWithOptions(tools, model, {
  agentType: "chat-conversational-react-description",
  memory,
});
await executor.call({ input: "what is my name?" })

In this code, new HumanMessage({ content: "hi my name is Mario", additional_kwargs: {} }) and new AIMessage({ content: "Hello, Mario! How can I assist you today?", additional_kwargs: {} }) create instances of HumanMessage and AIMessage, respectively, which are subclasses of BaseMessage.

Please note that the HumanMessage and AIMessage constructors take the message content and, optionally, additional kwargs. If your chat history includes additional information, you'll need to adjust the code accordingly.

I hope this helps! If you have any other questions, feel free to ask.


mdesousa commented 10 months ago

Hi, just wanted to confirm that this issue does not happen in version 0.0.165 of the langchain package, but it does happen in 0.0.166 and later (including the latest, 0.0.169). Thanks.

@bracesproul @jacoblee93

jacoblee93 commented 10 months ago

This is expected; you'll need to set returnMessages: true on your BufferMemory instance.

I'm surprised it was working properly before?
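
Applied to the snippet above, the fix is one extra option (a sketch, not a full reproduction):

const memory = new BufferMemory({
  chatHistory: new ChatMessageHistory(messages),
  memoryKey: "chat_history",
  // Return the history as an array of BaseMessages instead of one
  // flattened string, which is what MessagesPlaceholder expects
  returnMessages: true,
});
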

mdesousa commented 10 months ago

OK, it works now with 0.0.169. Thanks @jacoblee93

firminovctr commented 6 months ago

@jacoblee93 I have a ConversationChain:

const chain = new ConversationChain({
    llm: model,
    memory: chatPromptMemory,
    prompt: chatPrompt,
  });

The chatPromptMemory is a ConversationSummaryBufferMemory. You're right that it needs returnMessages: true to work, but doesn't that defeat the purpose of using ConversationSummaryBufferMemory, since it would return all the messages and use a lot of tokens? Is there any way to make it work without returning messages?

const chatPromptMemory = new ConversationSummaryBufferMemory({
    llm: new ChatOpenAI({
      modelName: "gpt-3.5-turbo-0125",
      temperature: 0,
      openAIApiKey: "OPENAI_API_KEY",
    }),
    maxTokenLimit: 10,
    returnMessages: true,
    chatHistory: messageHistory,
  });

When I tried it without returnMessages, it said: Field "history" in prompt uses a MessagesPlaceholder, which expects an array of BaseMessages as an input value.

For context, my chatHistory is a FirestoreChatMessageHistory. This is my prompt template:

const chatPrompt = ChatPromptTemplate.fromMessages([
    SystemMessagePromptTemplate.fromTemplate(
      "You are a helpful assistant"
    ),
    new MessagesPlaceholder("history"),
    HumanMessagePromptTemplate.fromTemplate("{input}"),
  ]);

Thanks in advance!

jacoblee93 commented 6 months ago

Yeah, that looks right: if returnMessages is true, it should serialize the summary as a message as well.
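
To be clear, with returnMessages: true the memory does not return every raw message: loadMemoryVariables returns the running summary as a single system message, followed by only the recent messages that still fit under maxTokenLimit, so token usage stays bounded. A rough sketch of the shape of the output (message contents here are illustrative):

const vars = await chatPromptMemory.loadMemoryVariables({});
// vars.history is roughly:
// [
//   SystemMessage("Summary of everything pruned so far..."), // the running summary
//   HumanMessage("most recent question"),                    // recent turns kept
//   AIMessage("most recent answer"),                         //   under maxTokenLimit
// ]
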

We're moving away from these memory and chain classes in general as they're a bit tricky to use. See:

https://js.langchain.com/docs/use_cases/chatbots/
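
For anyone landing here later, a minimal sketch of the direction those docs point in, assuming the RunnableWithMessageHistory API from @langchain/core (the model, in-memory session store, and key names here are illustrative; swap in a persistent history such as FirestoreChatMessageHistory as needed):

import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";
import { ChatMessageHistory } from "langchain/stores/message/in_memory";
import { ChatOpenAI } from "@langchain/openai";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  new MessagesPlaceholder("history"),
  ["human", "{input}"],
]);

const chain = prompt.pipe(
  new ChatOpenAI({ modelName: "gpt-3.5-turbo-0125", temperature: 0 })
);

// One history object per session id (in-memory here for simplicity)
const histories: Record<string, ChatMessageHistory> = {};

const chainWithHistory = new RunnableWithMessageHistory({
  runnable: chain,
  getMessageHistory: (sessionId: string) => {
    if (!histories[sessionId]) histories[sessionId] = new ChatMessageHistory();
    return histories[sessionId];
  },
  inputMessagesKey: "input",
  historyMessagesKey: "history",
});

// History is loaded and saved per session id automatically
await chainWithHistory.invoke(
  { input: "hi my name is Mario" },
  { configurable: { sessionId: "mario" } },
);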