FlowiseAI / Flowise

Drag & drop UI to build your customized LLM flow
https://flowiseai.com
Apache License 2.0

[FEATURE] Allow hiding LLM/Agent Output from Message History #3262

Open Banbury opened 1 day ago

Banbury commented 1 day ago

I am currently building an agent flow in which an LLM decides what to do next. The LLM generates JSON for a Condition node, and the Condition node reroutes the flow based on the LLM's output. There is really no need for that output to be added to the message history. Not only would being able to hide the LLM's output save tokens, but the unnecessary messages actually make it harder for subsequent LLMs to 'understand' the context. A sketch of the kind of routing payload involved is shown below.
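
For illustration, the routing JSON such an LLM produces might look like the following. This is a hypothetical example; the field names are not defined by Flowise, and the Condition node only needs to inspect this object once, so keeping it in the shared history adds noise for every later LLM call:

```json
{
  "next_step": "search_knowledge_base",
  "reason": "The user asked a factual question that requires retrieval."
}
```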

I would like to see a simple checkbox, "Do not copy to history", on LLM and Agent nodes. If checked, the output is handed over to the next node as usual, but it does not appear in the chat history.

HenryHengZJ commented 1 day ago

If you are using the embedded chat widget, you can turn it off with `showAgentMessages: false`: https://github.com/FlowiseAI/FlowiseChatEmbed?tab=readme-ov-file#configuration
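
For reference, a minimal sketch of how that option is set when initializing the embed script, assuming `showAgentMessages` sits under `theme.chatWindow` (see the linked README for the authoritative configuration shape); the chatflow ID and host below are placeholders:

```javascript
import Chatbot from "https://cdn.jsdelivr.net/npm/flowise-embed/dist/web.js";

Chatbot.init({
  chatflowid: "<your-chatflow-id>", // placeholder
  apiHost: "http://localhost:3000", // placeholder host
  theme: {
    chatWindow: {
      // Hide intermediate agent/LLM messages in the embedded chat UI
      showAgentMessages: false,
    },
  },
});
```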

Banbury commented 1 day ago

@HenryHengZJ I don't want to hide the output from myself, but from the other LLMs further down the workflow. I want to decide which messages are added to the history and which are not.

Banbury commented 1 day ago

Or add a node that allows removing messages from the state. LangGraph has an example here.
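
For context, the LangGraph pattern being referred to returns `RemoveMessage` objects from a graph node to delete entries from the shared `messages` state rather than appending to it. A minimal sketch of that idea using the JS packages (`@langchain/core`, used together with `@langchain/langgraph`); the node name and the "keep the last two messages" policy are illustrative, not from the original example:

```javascript
import { RemoveMessage } from "@langchain/core/messages";

// A graph node that deletes all but the last two messages from the
// shared "messages" state. Returning RemoveMessage instances with the
// ids of existing messages removes those entries instead of adding new ones.
function deleteOldMessages(state) {
  const messages = state.messages;
  if (messages.length <= 2) {
    return {};
  }
  return {
    messages: messages.slice(0, -2).map((m) => new RemoveMessage({ id: m.id })),
  };
}
```

A comparable node in Flowise could let a flow author drop the routing JSON from history once the Condition node has consumed it.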