ianarawjo / ChainForge

An open-source visual programming environment for battle-testing prompts to LLMs.
https://chainforge.ai/docs
MIT License
2.29k stars · 178 forks

Chain `PromptNode`s together #18

Closed ianarawjo closed 1 year ago

ianarawjo commented 1 year ago

Add the ability to attach a PromptNode as the input to another PromptNode.

ianarawjo commented 1 year ago

Partly solved by https://github.com/ianarawjo/ChainForge/commit/09871cdc1ff3a0449ae7bf073eb577b261a4fde7

But it would be nice to keep track of "vars" instead of flattening all responses in a list, so that chaining prompts together doesn't lose information about what generated the prompt. For instance, see the following image:

[Screenshot: Screen Shot 2023-05-11 at 10.26.38 AM]

Notice we've lost track of the "tool" property in the outputs to the right. Ideally, we'd still have access to upstream vars.

ianarawjo commented 1 year ago

This is now fixed: prompt parameter values may now be dicts of the form {text, fill_history}, and the fill_history is carried over to the new PromptTemplate, so upstream vars remain accessible.
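To illustrate the idea, here is a minimal sketch (hypothetical helper, not ChainForge's actual implementation) of how passing {text, fill_history} dicts instead of bare strings lets a downstream node keep seeing which vars filled the upstream prompt:

```python
# Sketch: a prompt parameter value can be either a plain string or a dict
# of form {text, fill_history}. Carrying fill_history forward means a
# chained prompt still knows which vars (e.g. "tool") filled its inputs.

def fill_template(template: str, var: str, value) -> dict:
    """Fill one {var} slot in the template, preserving upstream fill history."""
    if isinstance(value, dict):  # an upstream response: {text, fill_history}
        text, history = value["text"], dict(value["fill_history"])
    else:  # a plain string value
        text, history = value, {}
    history[var] = text  # record what filled this slot
    return {
        "text": template.replace("{" + var + "}", text),
        "fill_history": history,
    }

# Upstream PromptNode fills a "tool" var:
upstream = fill_template("Describe the tool {tool}.", "tool", "hammer")
# Downstream PromptNode consumes the upstream response;
# the "tool" var is still visible in its fill_history.
downstream = fill_template("Summarize: {input}", "input", upstream)
```

With plain strings, the downstream node would only see the flattened text; with the dict form, `downstream["fill_history"]` still contains the upstream `"tool"` value.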

profplum700 commented 1 year ago

I tried to do this but got an error. Am I doing it wrong? Here is a screenshot of the setup:

[Screenshot of the setup]

The error shown when I click the play button on my 'CleanUp' Prompt Node is:

Cannot send a prompt 'Analyse the table below of recommendations to improve the requirements of a user story. Identify any duplicates by using the first instance as the template and then adding any further detail from the copies.
Here is the table:
[
{
"Id": 1,
"Criteria": "<redacted>",
"Recommendation": "<redacted>",
"Justification": "<redacted>"
},
{
"Id": 2,
"Criteria": "<redacted>",
"Recommendation": "<redacted>",
"Justification": "<redacted>"
},
{
"Id": 3,
"Criteria": "<redacted>",
"Recommendation": "<redacted>",
"Justification": "<redacted>"
}
]' to LLM: Prompt is a template.

Also, is there a way to amalgamate all the results from my first Prompt Node so the second Prompt Node processes that?

ianarawjo commented 1 year ago

Ah, it seems the issue is caused by the braces { } in the output: we'll need to escape braces in the output of prompt nodes. Good catch, I'll work on this in a moment.

For the aggregation idea, can you open up a separate Issue? Thanks!

ianarawjo commented 1 year ago

Okay, I've found the bug. Will push an update in a minute. Thanks for spotting this!

Any braces in the outputs of Prompt Nodes will now be escaped as \{ and \} by default, so there's no confusion with template variables. Note that all escaped braces \{ and \} are replaced with { and } before the prompts are sent off to LLMs.
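The escape/unescape round trip can be sketched as follows (hypothetical helper names, not ChainForge's actual code): literal braces in a node's output are escaped so the templating step doesn't mistake them for variables, then restored just before the final prompt goes to the LLM:

```python
# Sketch of the brace-escaping fix: JSON (or any text with braces) coming
# out of a Prompt Node would otherwise look like template variables to the
# downstream node, triggering "Prompt is a template" errors.

def escape_braces(text: str) -> str:
    """Escape literal braces so templating ignores them."""
    return text.replace("{", "\\{").replace("}", "\\}")

def unescape_braces(text: str) -> str:
    """Restore literal braces before sending the prompt to the LLM."""
    return text.replace("\\{", "{").replace("\\}", "}")

output = '{"Id": 1, "Criteria": "..."}'   # JSON from an upstream node
safe = escape_braces(output)              # \{ ... \} survives templating
prompt = "Here is the table:\n" + safe    # filled into the downstream template
final_prompt = unescape_braces(prompt)    # braces restored for the LLM call
```

The downstream template logic only ever sees the escaped form, so the JSON keys are never parsed as template variables.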

ianarawjo commented 1 year ago

The bug is now patched in all versions of the app. Thanks again!