FlowiseAI / Flowise

Drag & drop UI to build your customized LLM flow
https://flowiseai.com
Apache License 2.0
30.95k stars · 16.11k forks

[BUG] Differences between webservice prediction endpoint and langfuse tracing (llmchain/structured output parser) #2810

Open koorlan opened 3 months ago

koorlan commented 3 months ago

Describe the bug
There are differences between the webservice prediction output and the Langfuse trace when using an LLM Chain with a structured (JSON) output parser.

In the webservice response the structured JSON output is nested under a "json" key, whereas in Langfuse the same object is traced under a "text" key.

Webservice response:

{
    "json": {
        "go_back_to_menu": false,
        "transfer_to_agent": false
    },
    "question": "hey",
    "chatId": "issue-1",
    "chatMessageId": "a057c1b8-82c5-4727-9d99-9c0c20d29995",
    "sessionId": "issue-1"
}

Langfuse (LLM Chain span):

{
    "text": {
        "go_back_to_menu": false,
        "transfer_to_agent": false
    }
}

Note: I use the structured output parser with autofix enabled.

To Reproduce
Flowise 1.8.0, LLM Chain with an output parser (autofix on)

Call the prediction endpoint with curl (or Postman) --> structured output is under the "json" key

Check the corresponding trace in Langfuse --> the same output is under the "text" key
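For reference, the repro call can be sketched in TypeScript. This is a sketch, not code from the report: the base URL and chatflow id below are placeholders, and the request shape follows Flowise's `POST /api/v1/prediction/{chatflowId}` endpoint.

```typescript
// Placeholders, not values from the original report.
const BASE_URL = "http://localhost:3000";
const CHATFLOW_ID = "<your-chatflow-id>";

// Build the request separately so the payload shape is easy to inspect.
function buildPredictionRequest(baseUrl: string, chatflowId: string, question: string) {
  return {
    url: `${baseUrl}/api/v1/prediction/${chatflowId}`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ question }),
    },
  };
}

// Usage (requires a running Flowise instance, so not executed here):
// const { url, init } = buildPredictionRequest(BASE_URL, CHATFLOW_ID, "hey");
// const res = await fetch(url, init);
// console.log(await res.json()); // structured output arrives under the "json" key
```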

Expected behavior
The prediction endpoint response and the Langfuse trace should use the same key for the structured output.

Screenshots: [attached]

Additional context
Basic chain with JSON output:

[screenshot of the flow]

HenryHengZJ commented 3 months ago

text is the key we used to trace the output - https://github.com/FlowiseAI/Flowise/blob/main/packages/components/src/handler.ts#L455
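To illustrate the mismatch the report describes, here is a hypothetical sketch (not Flowise's actual code): the same chain output ends up under different top-level keys depending on the consumer — the prediction endpoint nests it under `json`, while the tracing handler wraps whatever it records under a fixed `text` key.

```typescript
type ChainOutput = Record<string, unknown>;

// What the prediction endpoint returns (per the report): output under "json",
// alongside the chat metadata.
function buildApiResponse(output: ChainOutput, meta: { question: string; chatId: string }) {
  return { json: output, ...meta };
}

// What the tracing handler records (per the maintainer's reply and handler.ts#L455):
// output wrapped under "text".
function buildTracePayload(output: ChainOutput) {
  return { text: output };
}

const output = { go_back_to_menu: false, transfer_to_agent: false };
const api = buildApiResponse(output, { question: "hey", chatId: "issue-1" });
const trace = buildTracePayload(output);

// Same underlying object, different top-level key: api.json and trace.text
// are identical, but a caller comparing the two payloads sees "json" in one
// place and "text" in the other.
```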