langchain-ai / langchainjs

🦜🔗 Build context-aware reasoning applications 🦜🔗
https://js.langchain.com/docs/
MIT License
12.43k stars 2.1k forks

The wrong names are used for the user and assistant for openai when using ChatPromptTemplate.fromMessages #5231

Closed · a-i-joe closed this issue 2 months ago

a-i-joe commented 5 months ago

Checked other resources

Example Code

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { HumanMessage, AIMessage } from "@langchain/core/messages";
import { StringOutputParser } from "@langchain/core/output_parsers";

const chatModel = new ChatOpenAI({..});

const prompt = ChatPromptTemplate.fromMessages([
    new HumanMessage("hello"),
    new AIMessage("hello"),
    new HumanMessage("how are you"),
]);

const strOutputParser = new StringOutputParser();
const chain = prompt
    .pipe(chatModel)
    .pipe(strOutputParser);

const response = await chain.invoke({});

Error Message and Stack Trace (if applicable)

No response

Description

When I log the prompt inside the handleLLMStart callback (the _prompts argument), I see the following.

Expected:

user: hello
assistant: hello
user: how are you

Actual:

Human: hello
AI: hello
Human: how are you
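For context, the "Human:"/"AI:" labels are what you get when a list of messages is flattened into a single string by message type for logging, rather than the roles sent in the OpenAI payload. A standalone sketch of that kind of label-based formatting (an illustrative stand-in, not LangChain's actual internals):

```typescript
// Sketch of how a chat transcript can end up with "Human:"/"AI:" labels
// when messages are flattened into one string for logging.
// Illustrative only; not LangChain's actual implementation.

type MessageType = "human" | "ai" | "system";

interface SimpleMessage {
  type: MessageType;
  content: string;
}

// Display labels used when rendering a transcript as plain text.
const DISPLAY_LABELS: Record<MessageType, string> = {
  human: "Human",
  ai: "AI",
  system: "System",
};

function toBufferString(messages: SimpleMessage[]): string {
  return messages
    .map((m) => `${DISPLAY_LABELS[m.type]}: ${m.content}`)
    .join("\n");
}

const transcript: SimpleMessage[] = [
  { type: "human", content: "hello" },
  { type: "ai", content: "hello" },
  { type: "human", content: "how are you" },
];

console.log(toBufferString(transcript));
// Human: hello
// AI: hello
// Human: how are you
```

Under this assumption, the labels in the log are a display convention of the string serialization, which is distinct from the role names the OpenAI API expects.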

I have also tried using MessageHistory, and manually specifying every message in the ["assistant", "hello"] tuple format.

I found the following in the Bittensor chat model implementation: https://github.com/langchain-ai/langchainjs/blob/e0f5668d3437fad01851fd8472e10705a27d120b/langchain/src/experimental/chat_models/bittensor.ts#L61C1-L74C1

  messageToOpenAIRole(message: BaseMessage) {
    const type = message._getType();
    switch (type) {
      case "system":
        return "system";
      case "ai":
        return "assistant";
      case "human":
        return "user";
      default:
        return "user";
    }
  }

This seems to indicate that the correct role names were intended, but presumably this mapping is not being applied in my case.
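The mapping quoted above can be exercised on its own. A self-contained version (re-implemented here for illustration, taking a plain type string instead of a BaseMessage) produces the role names the reporter expected:

```typescript
// Standalone re-implementation of the role mapping shown above,
// operating on the message type string directly instead of a BaseMessage.
function messageTypeToOpenAIRole(type: string): string {
  switch (type) {
    case "system":
      return "system";
    case "ai":
      return "assistant";
    case "human":
      return "user";
    default:
      return "user";
  }
}

// The transcript from the issue, expressed as [type, content] pairs.
const messages: Array<[string, string]> = [
  ["human", "hello"],
  ["ai", "hello"],
  ["human", "how are you"],
];

// Render with OpenAI role names, as the reporter expected to see in the log.
for (const [type, content] of messages) {
  console.log(`${messageTypeToOpenAIRole(type)}: ${content}`);
}
// user: hello
// assistant: hello
// user: how are you
```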

System Info

node --version v20.12.2

"langchain": "0.1.36"

dosubot[bot] commented 5 months ago

The behavior you're observing with the labels "Human" and "AI" in the ChatPromptTemplate.fromMessages method is by design in the LangChain.js framework. This method uses specific classes to represent human and AI messages, which inherently label messages as "Human" and "AI" based on the type of message they are meant to represent.

To customize these labels, LangChain.js provides a configuration option through the role parameter when creating instances of ChatMessagePromptTemplate. This allows for specifying the role of each message in the chat prompt, enabling customization of the labels used for different participants in the chat.

It's important to note that the MessageHistory class and other related classes in LangChain.js label messages based on their built-in message types; the "Human" and "AI" labels come from this type-based mapping and are intentional within the LangChain.js framework.

If you're experiencing issues with conversation flow or response generation, it could be due to several factors related to the integration of ChatOpenAI and ChatPromptTemplate.fromMessages. There are several alternative methods and classes in LangChain.js that could be used for creating chat prompts with custom labels, such as MessagesPlaceholder, ChatMessagePromptTemplate, and ChatPromptTemplate.fromTemplate.

The StringOutputParser class is designed to handle the output from a language model call and convert it into a string format, which could influence the final output seen after language model processing.

For further customization or if you encounter any specific issues with the output or behavior of these methods, reviewing the LangChain.js documentation or exploring alternative methods within the framework might provide additional insights or solutions.

#### Details

**Sources**
- [Langsmith Hub Javascript Inline doesn't work](https://github.com/langchain-ai/langchainjs/issues/4915)
- [langchain-core/src/prompts/chat.ts](https://github.com/langchain-ai/langchainjs/blob/main/langchain-core/src/prompts/chat.ts)
- [langchain-core/src/chat_history.ts](https://github.com/langchain-ai/langchainjs/blob/main/langchain-core/src/chat_history.ts)
- [langchain-core/src/messages/utils.ts](https://github.com/langchain-ai/langchainjs/blob/main/langchain-core/src/messages/utils.ts)
- [langchain-core/src/prompts/tests/chat.test.ts](https://github.com/langchain-ai/langchainjs/blob/main/langchain-core/src/prompts/tests/chat.test.ts)
- [libs/langchain-openai/src/chat_models.ts](https://github.com/langchain-ai/langchainjs/blob/main/libs/langchain-openai/src/chat_models.ts)
- [examples/src/chat/memory.ts](https://github.com/langchain-ai/langchainjs/blob/main/examples/src/chat/memory.ts)
- [examples/src/chat/overview.ts](https://github.com/langchain-ai/langchainjs/blob/main/examples/src/chat/overview.ts)
- [examples/src/prompts/prompts.ts](https://github.com/langchain-ai/langchainjs/blob/main/examples/src/prompts/prompts.ts)
- [langchain-core/src/output_parsers/string.ts](https://github.com/langchain-ai/langchainjs/blob/main/langchain-core/src/output_parsers/string.ts)
- [examples/src/get_started/quickstart.ts](https://github.com/langchain-ai/langchainjs/blob/main/examples/src/get_started/quickstart.ts)
**About Dosu**

This response is meant to be useful and save you time. It is **not meant to be a precise solution**, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

To continue the conversation, mention @dosu.