Closed: 69popovvlad closed this issue 8 months ago
🤖
Hello @69popovvlad!
I'm Dosu, a bot here to assist you with your LangChain issues, questions, and contributions while the human maintainers are busy. I'm here to make your experience with LangChain as smooth as possible. Let's get started, shall we?
Based on the error message you're receiving, the issue is related to the format of the 'messages' array when 'response_format' is set to 'json_object': the API requires the word 'json' to appear somewhere in the messages you send.
From the context provided, the 'messages' array in the LangChain codebase is expected to be a list of dictionaries, where each dictionary represents a message with keys such as "from", "body", "subject", "date", "to", "cc", and "bcc". In your 'LLM run with input' section, however, 'messages' is a list of lists of dictionaries, which might be what is causing the issue.
Here's the relevant code snippet that generates the expected format:
```python
output_messages = []
for message in messages:
    output_message = {}
    output_message["from"] = message.sender
    if truncate:
        output_message["body"] = message.body_preview[:truncate_limit]
    else:
        output_message["body"] = clean_body(message.body)
    output_message["subject"] = message.subject
    output_message["date"] = message.modified.strftime(UTC_FORMAT)
    output_message["to"] = [str(recipient) for recipient in message.to._recipients]
    output_message["cc"] = [str(recipient) for recipient in message.cc._recipients]
    output_message["bcc"] = [str(recipient) for recipient in message.bcc._recipients]
    output_messages.append(output_message)
```
Source: libs/langchain/langchain/tools/office365/messages_search.py
I would suggest adjusting the format of your 'messages' array to match the expected format. If you're still encountering issues after making these adjustments, please provide more information about the structure of your 'messages' array and the exact steps you're taking when encountering this error.
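Another thing worth trying: since the error only says the messages must contain the word 'json' in some form, the simplest fix is often to mention JSON explicitly in the prompt text itself. Here is a minimal sketch on the LangChain.js side (the prompt wording and the {message} variable are only illustrative, and import paths may differ between langchain versions):

```ts
import { ChatOpenAI } from "langchain/chat_models/openai";
import { PromptTemplate } from "langchain/prompts";

const llm = new ChatOpenAI({
  openAIApiKey: process.env.OPENAI_API_KEY,
  modelName: "gpt-3.5-turbo-1106",
  modelKwargs: { response_format: { type: "json_object" } },
});

// The word "JSON" appears in the prompt itself, which is what the API checks
// before it will honour response_format: { type: "json_object" }.
const prompt = PromptTemplate.fromTemplate(
  "You are a helpful assistant. Reply only with a valid JSON object.\n\nQuestion: {message}"
);

const text = await prompt.format({ message: "Name three planets." });
const result = await llm.invoke(text); // result.content should now be a JSON string
```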
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
@dosu-bot I also use the code below to create a prompt, which I then pass into the ConversationChain, but the messages are still passed as an array. How can I create a prompt in this case?
```ts
const prompt = PromptTemplate.fromTemplate(args.prompt);

const memory = new ConversationSummaryMemory({
  llm: this.llm,
  memoryKey: 'chat_history',
  inputKey: 'message',
});

this.chain = new ConversationChain({
  llm: this.llm,
  prompt,
  memory,
  verbose: true,
});
```
🤖
Based on the error message you're receiving, it seems that the 'messages' array needs to contain the word 'json' in some form when using 'response_format' of type 'json_object'. This suggests that the 'messages' array should be a list of dictionaries, where each dictionary represents a message.
In the LangChain Python framework, the 'messages' array is a sequence of BaseMessage objects. Each BaseMessage object can be an instance of one of the following classes: AIMessage, HumanMessage, ChatMessage, SystemMessage, FunctionMessage, or ToolMessage. Each message object has a 'content' attribute and may have additional attributes depending on its type.
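The equivalent classes also exist in LangChain.js (for example HumanMessage, SystemMessage, and AIMessage in 'langchain/schema'), so another low-effort way to satisfy the requirement is to put the word JSON into a system message. A rough sketch, with illustrative content only:

```ts
import { ChatOpenAI } from "langchain/chat_models/openai";
import { SystemMessage, HumanMessage } from "langchain/schema";

const llm = new ChatOpenAI({
  openAIApiKey: process.env.OPENAI_API_KEY,
  modelName: "gpt-3.5-turbo-1106",
  modelKwargs: { response_format: { type: "json_object" } },
});

// "JSON" appears in the system message, so the request passes the API's check.
const reply = await llm.invoke([
  new SystemMessage("You are a helpful assistant. Always answer with a single JSON object."),
  new HumanMessage("Give me a colour and its hex code."),
]);
```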
To modify the format of your 'messages' array to be compatible with the 'response_format' of type 'json_object', you need to convert the list of BaseMessage objects to a list of dictionaries. This can be done using the '_message_to_dict' function that is already imported from the 'langchain.schema.messages' module.
Here is how you can modify the 'messages' property:
```python
@property
def messages(self) -> List[dict]:  # type: ignore
    r = self._client.data().query(
        self._table_name,
        payload={
            "filter": {
                "sessionId": self._session_id,
            },
            "sort": {"xata.createdAt": "asc"},
        },
    )
    if r.status_code != 200:
        raise Exception(f"Error running query: {r.status_code} {r}")
    msgs = [
        _message_to_dict(
            BaseMessage(
                type=m["type"],
                data={
                    "content": m["content"],
                    "role": m.get("role"),
                    "name": m.get("name"),
                    "additional_kwargs": json.loads(m["additionalKwargs"]),
                },
            )
        )
        for m in r["records"]
    ]
    return msgs
```
In this modification, instead of using 'messages_from_dict' to convert the list of dictionaries to a list of BaseMessage objects, we are using '_message_to_dict' to convert each BaseMessage object to a dictionary. The return type of the 'messages' property is also changed to 'List[dict]'.
You can find more information about this in the LangChain repository.
Please try this modification and let me know if it resolves your issue. If you have any further questions or need additional clarification, feel free to ask.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
If I create a new OpenAI instance for the memory it works, but the "messages" are still an array of arrays, and each response has a different JSON schema that doesn't match my Zod schema.
```ts
this.llm = new ChatOpenAI({
  openAIApiKey: process.env.OPENAI_API_KEY,
  modelName: 'gpt-3.5-turbo-1106',
  modelKwargs: {
    response_format: {
      type: 'json_object',
    },
  },
});

this.llmMemory = new ChatOpenAI({
  openAIApiKey: process.env.OPENAI_API_KEY,
  modelName: 'gpt-3.5-turbo',
});

const prompt = PromptTemplate.fromTemplate(args.prompt);

const memory = new ConversationSummaryMemory({
  llm: this.llmMemory,
  memoryKey: 'chat_history',
  inputKey: 'message',
});

this.chain = new ConversationChain({
  llm: this.llm,
  prompt,
  memory,
  verbose: true,
});
```
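One direction worth trying for the schema drift (a sketch only, assuming LangChain.js's StructuredOutputParser and Zod; the schema fields below are made up) is to put the parser's format instructions, which spell out the exact JSON schema and contain the word JSON, into the prompt, while keeping a plain model for the summary memory as above:

```ts
import { z } from "zod";
import { ChatOpenAI } from "langchain/chat_models/openai";
import { PromptTemplate } from "langchain/prompts";
import { ConversationSummaryMemory } from "langchain/memory";
import { ConversationChain } from "langchain/chains";
import { StructuredOutputParser } from "langchain/output_parsers";

// Hypothetical schema: replace with your own Zod schema.
const parser = StructuredOutputParser.fromZodSchema(
  z.object({
    answer: z.string().describe("the assistant's reply"),
    mood: z.string().describe("one-word mood of the reply"),
  })
);

// The format instructions mention JSON and describe the exact schema, so they
// both satisfy the response_format check and keep the output shape stable.
const prompt = new PromptTemplate({
  template: `Answer the user and respond only with a JSON object matching the schema.
{format_instructions}

Current conversation:
{chat_history}
Human: {message}
AI:`,
  inputVariables: ["chat_history", "message"],
  partialVariables: { format_instructions: parser.getFormatInstructions() },
});

const llm = new ChatOpenAI({
  openAIApiKey: process.env.OPENAI_API_KEY,
  modelName: "gpt-3.5-turbo-1106",
  modelKwargs: { response_format: { type: "json_object" } },
});

// Plain model for memory summarization, so summary calls are not forced into JSON mode.
const llmMemory = new ChatOpenAI({
  openAIApiKey: process.env.OPENAI_API_KEY,
  modelName: "gpt-3.5-turbo",
});

const memory = new ConversationSummaryMemory({
  llm: llmMemory,
  memoryKey: "chat_history",
  inputKey: "message",
});

const chain = new ConversationChain({ llm, prompt, memory, verbose: true });

// Later, e.g. inside an async handler:
const res = await chain.call({ message: "How are you today?" });
const structured = await parser.parse(res.response); // validated against the Zod schema
```

The idea is that the format instructions pin the model to one shape on every call, and parser.parse() then checks each response against the Zod schema, but this is only a starting point, not a verified fix.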
Issue you'd like to raise

(The original report also included Environment, Prompt, Creating model code, Send message code, and LLM run with input sections; their contents are not reproduced here.)
Error:
400 'messages' must contain the word 'json' in some form, to use 'response_format' of type 'json_object'.