Open · juanbercoff opened this issue 7 months ago
Hey @juanbercoff, good to see you back around here! Hope you're doing well.
Thank you for providing a detailed description of the issue you're encountering with `RunnableMap` when using `ChatAnthropic` and `JsonOutputParser`. This seems to be a complex issue that might be related to how these services handle or format their outputs, especially when tools or functions are involved in the responses.
Given the context and the code snippet you provided, there are a few potential reasons for this behavior:
1. Output Formatting Differences: `ChatAnthropic` and `ChatOpenAI` might format their outputs differently, especially when tools or functions are involved. This difference in formatting could affect how `JsonOutputToolsParser` parses the output. If `ChatAnthropic`'s output doesn't match the expected format, or includes additional metadata that `JsonOutputToolsParser` doesn't account for, the result may be missing or incorrectly parsed outputs.
2. Tool Invocation Handling: The way `ChatAnthropic` handles the invocation of tools within the text might differ from `ChatOpenAI`, and this difference could produce output that `JsonOutputToolsParser` can't correctly parse. For example, if `ChatAnthropic` includes additional information or uses a different structure to indicate the tool's output, the parser might not recognize or correctly extract the relevant data.
3. Model Differences: The specified `modelName` (`claude-3-sonnet-20240229`) might handle structured output differently than the models used with `ChatOpenAI`. This could inherently affect how data is structured in the output, impacting the parser's ability to extract the expected information.
4. Parser Expectations: `JsonOutputToolsParser` might have specific expectations about the output's structure that are met by `ChatOpenAI` but not by `ChatAnthropic`. This could be due to assumptions made during the parser's development about how data is presented in the output.
To address this issue, you might need to:

1. Inspect the raw output of `ChatAnthropic` to identify any differences in structure or formatting compared to `ChatOpenAI` (see the sketch below).
2. Adjust `JsonOutputToolsParser` to handle the specific output format or structure provided by `ChatAnthropic`, ensuring it can correctly parse and extract the necessary information.

You can find more information about how `RunnableMap` handles the outputs from different services in the LangChain JS codebase in the `RunnableMap` test file. This file contains tests that demonstrate how `RunnableMap` can handle parallel processing of different services and then aggregate their outputs.
I hope this information helps you understand the potential causes of the issue and possible steps to resolve it. If you have any further questions or need more clarification, please don't hesitate to ask.
My guess is that the Anthropic model isn't returning JSON or the parser is otherwise failing to parse the output. Can you share what output you're seeing from the model?
Hi Jacob,
When running the Anthropic model with the JSON parser without nesting, it works as expected and outputs JSON. The issue occurs when nesting, as shown in the example.
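For context, here is a minimal sketch of the non-nested case described as working (the prompt and model name are placeholders, not the code from the issue):

```js
import { ChatAnthropic } from "@langchain/anthropic";
import { JsonOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";

// Non-nested chain: prompt -> model -> JSON parser, streamed directly.
const chain = ChatPromptTemplate.fromTemplate(
  "Return a JSON object describing {topic}."
)
  .pipe(new ChatAnthropic({ modelName: "claude-3-sonnet-20240229" }))
  .pipe(new JsonOutputParser());

const stream = await chain.stream({ topic: "cats" });
for await (const chunk of stream) {
  console.log(chunk); // progressively larger partial JSON objects
}
```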
This is still happening, and it is now also happening with the OpenAI model.
Hey, sorry for the wait. I have just reproduced the issue and will look into it.
Ah, no, this is intended. The issue is the final step in your `RunnableMap`:
```js
{
  output: (i) => {
    console.log('i', i)
    return 'output'
  },
},
```
Because `output` is a function there, it becomes a `RunnableLambda`, which will gather the full output of the previous step before invoking the function.
There's another construct that isn't in LangChain.js yet that would support what you want to do - will keep this issue open and look into adding it.
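To make the point above concrete, here is a minimal sketch with stand-in branches (no model involved; the names and values are illustrative, not the original code):

```js
import { RunnableMap, RunnableSequence } from "@langchain/core/runnables";

// Stand-ins for the branches in the issue; in the original report one of
// these is a model + JsonOutputParser sequence.
const map = RunnableMap.from({
  runnableSequence: async () => ({ answer: 42 }),
  else: () => "else",
});

const chain = RunnableSequence.from([
  map,
  // A plain function as the final step is coerced into a RunnableLambda.
  {
    output: (i) => {
      console.log("i", i); // on invoke: { runnableSequence: { answer: 42 }, else: 'else' }
      return "output";
    },
  },
]);

console.log(await chain.invoke({})); // { output: 'output' }
```

On `invoke`, the function receives the fully assembled map output; how this final step behaves during streaming is what the rest of this thread is about.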
Hi Jacob, but why is it working with the `StringOutputParser` and not with the `JsonOutputParser`? Using our own custom parser also works.
Really? Are you sure you have the final step as that custom function in both cases?
It doesn't make sense to repeatedly call a function that doesn't return an iterator with chunks, so that might be a bug on our end if that's the case.
I'll try to provide another reproduction example as soon as I can.
Hello, I am part of a group of University of Toronto students looking to investigate this issue. Have you had this issue with any output parsers other than `JsonOutputParser`? Were there any specific changes to your chain or the nesting that caused the OpenAI chat model to produce this issue despite it originally working as you intended?
We changed most of our architecture recently and ended up writing our own output parser. So tbh I don't know if this issue is still reproducible with the latest updates.
Checked other resources
Example Code
Error Message and Stack Trace (if applicable)
No response
Description
When using the ChatAnthropic model and nesting a RunnableSequence with a JsonOutputParser, the expected input of the output chain should be `{ else: 'else', runnableSequence: //some output }`, but instead only one property appears in the log. This does not happen when using ChatOpenAI, for example. This only happens when streaming the final chain; if I invoke the chain, both outputs appear.
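A hypothetical reconstruction of the kind of chain the description refers to (the prompt, branch names, and model configuration are assumptions, not the original example code):

```js
import { ChatAnthropic } from "@langchain/anthropic";
import { JsonOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnableMap, RunnableSequence } from "@langchain/core/runnables";

// Nested sequence: prompt -> ChatAnthropic -> JsonOutputParser.
const nested = RunnableSequence.from([
  ChatPromptTemplate.fromTemplate("Return a JSON object about {topic}."),
  new ChatAnthropic({ modelName: "claude-3-sonnet-20240229" }),
  new JsonOutputParser(),
]);

const chain = RunnableSequence.from([
  RunnableMap.from({
    runnableSequence: nested,
    else: () => "else",
  }),
  {
    output: (i) => {
      console.log("i", i); // reported: only one property shows up when streaming
      return "output";
    },
  },
]);

// invoke() reportedly logs both properties...
await chain.invoke({ topic: "cats" });

// ...while stream() is where only one property appears.
const stream = await chain.stream({ topic: "cats" });
for await (const chunk of stream) {
  // consume the stream
}
```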
System Info
langchain@0.1.30, Windows, Node v20.11.0, pnpm 8.3.1