dimagi / open-chat-studio

A web-based platform for building chatbots backed by large language models
BSD 3-Clause "New" or "Revised" License

Chat to a pipeline #608

Closed · proteusvacuum closed this 1 month ago

proteusvacuum commented 1 month ago

Description

This allows connecting an Experiment with a Pipeline. I went with creating the Pipeline runnable, as it seemed like the simplest option, although that required the changes in 41e59850f27205029620ab89236126afc60557ae, which I'd love feedback on (the TopicBot requires an LLM provider to count tokens, etc.). The other option, making a PipelineBot, might let us bypass that.
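
For context, a minimal sketch of the PipelineBot idea; all names here are illustrative guesses, not the actual open-chat-studio classes:

```python
# Hypothetical sketch only -- class and method names are illustrative,
# not the actual open-chat-studio API.
from dataclasses import dataclass, field


@dataclass
class PipelineState:
    """Carries the message and any intermediate outputs between nodes."""
    message: str
    outputs: dict = field(default_factory=dict)


class PipelineBot:
    """Runs an experiment's pipeline by invoking each node in sequence."""

    def __init__(self, nodes):
        self.nodes = nodes  # ordered list of node objects exposing _process

    def process_input(self, user_input: str) -> str:
        state = PipelineState(message=user_input)
        for node in self.nodes:
            # Each node transforms the message; its output feeds the next node.
            state.message = node._process(state)
        return state.message
```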

User Impact

Chat to a pipeline! Once we have optional edges, you could then build router bots, safety bots, and any other complex bot type :)
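
Purely illustrative, since optional edges don't exist yet and none of these names come from the codebase: a router node could pick one of several outgoing edges based on a classification of the message.

```python
# Hypothetical router node -- optional edges are not implemented yet.
class RouterNode:
    """Picks one outgoing edge by classifying the incoming message."""

    def __init__(self, routes):
        # routes maps a classification label to the next node, e.g.
        # {"unsafe": safety_node, "default": main_node}
        self.routes = routes

    def _process(self, state) -> str:
        label = self.classify(state.message)
        next_node = self.routes.get(label, self.routes["default"])
        return next_node._process(state)

    def classify(self, message: str) -> str:
        # Stand-in for an LLM call that labels the message.
        return "unsafe" if "danger" in message.lower() else "default"
```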

I renamed the CreateReport node. I'm not sure whether anyone has created production pipelines with that node; if so, I'll write a migration to rename those nodes.
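
If a rename migration does turn out to be needed, it could be a small data migration along these lines. The app label, model name, JSON layout, and new node name here are all assumptions, not the project's actual schema:

```python
# Hypothetical data migration -- assumes a Pipeline model whose `data`
# JSON field holds a list of nodes, each with a `type` key. Adjust to
# the actual schema before using.
from django.db import migrations


def rename_create_report_nodes(apps, schema_editor):
    Pipeline = apps.get_model("pipelines", "Pipeline")
    for pipeline in Pipeline.objects.all():
        changed = False
        for node in pipeline.data.get("nodes", []):
            if node.get("type") == "CreateReport":
                node["type"] = "LLMResponseWithPrompt"  # assumed new name
                changed = True
        if changed:
            pipeline.save(update_fields=["data"])


class Migration(migrations.Migration):
    dependencies = [("pipelines", "0001_initial")]  # placeholder dependency

    operations = [
        migrations.RunPython(rename_create_report_nodes, migrations.RunPython.noop),
    ]
```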

Demo

Screencast from 2024-08-14 22-54-14.webm

Docs

bderenzi commented 1 month ago

(very cool!)

proteusvacuum commented 1 month ago

I'm working on getting branching to work in the pipelines, but will do that in a separate PR.

SmittieC commented 1 month ago

My thoughts:

In my mind, the TopicBot class is a hardcoded pipeline. It runs each "bot"/experiment (child bot, terminal bot, main bot) as a separate runnable. I like the new PipelineBot class; I think it makes that distinction clear. I am thinking, though, that LLMResponseWithPrompt should also invoke runnables like we do in the TopicBot class, but of course not in the same "hardcoded" way.

What I am suggesting is that we do the same as what we're doing here, creating a runnable and invoking it, but that we create and run the runnable inside the _process method of LLMResponseWithPrompt.

We can also add a new state for the pipeline runnable. This way we can get things like the chat model, the prompt, etc. from the pipeline instead of the experiment.
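
As a rough illustration of this suggestion (everything here, including the state accessors, is a guess at the design, not code from the PR):

```python
# Hypothetical sketch of the suggestion -- all names are illustrative,
# not the actual open-chat-studio classes.
class LLMResponseWithPrompt:
    def _process(self, state) -> str:
        # Build and invoke the runnable inside _process, pulling the chat
        # model and prompt from the pipeline state instead of the experiment.
        llm = state.get_chat_model()   # assumed accessor on the new state
        prompt = state.get_prompt()    # assumed accessor on the new state

        def runnable(message: str) -> str:
            # Stand-in for composing a LangChain-style `prompt | llm` chain.
            return llm.invoke(prompt.format(input=message))

        return runnable(state.message)
```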

I hope this makes sense? I'm not sure if this is the correct way to go, but I have a feeling that it is.

Thoughts? cc @snopoke

snopoke commented 1 month ago

> My thoughts:
>
> In my mind, the TopicBot class is a hardcoded pipeline. It runs each "bot"/experiment (child bot, terminal bot, main bot) as a separate runnable. I like the new PipelineBot class; I think it makes that distinction clear. I am thinking, though, that LLMResponseWithPrompt should also invoke runnables like we do in the TopicBot class, but of course not in the same "hardcoded" way.
>
> What I am suggesting is that we do the same as what we're doing here, creating a runnable and invoking it, but that we create and run the runnable inside the _process method of LLMResponseWithPrompt.
>
> We can also add a new state for the pipeline runnable. This way we can get things like the chat model, the prompt, etc. from the pipeline instead of the experiment.
>
> I hope this makes sense? I'm not sure if this is the correct way to go, but I have a feeling that it is.
>
> Thoughts? cc @snopoke

There is some scope for refactoring, but I think it should be done separately.

SmittieC commented 1 month ago

> > My thoughts: In my mind, the TopicBot class is a hardcoded pipeline. It runs each "bot"/experiment (child bot, terminal bot, main bot) as a separate runnable. I like the new PipelineBot class; I think it makes that distinction clear. I am thinking, though, that LLMResponseWithPrompt should also invoke runnables like we do in the TopicBot class, but of course not in the same "hardcoded" way. What I am suggesting is that we do the same as what we're doing here, creating a runnable and invoking it, but that we create and run the runnable inside the _process method of LLMResponseWithPrompt. We can also add a new state for the pipeline runnable. This way we can get things like the chat model, the prompt, etc. from the pipeline instead of the experiment. I hope this makes sense? I'm not sure if this is the correct way to go, but I have a feeling that it is. Thoughts? cc @snopoke
>
> There is some scope for refactoring, but I think it should be done separately.

Do you mean the refactoring should be done separately, or the pipeline implementation should be done separately (i.e. we shouldn't have runnables inside the pipeline as I proposed)?

snopoke commented 1 month ago

> Do you mean the refactoring should be done separately, or the pipeline implementation should be done separately (i.e. we shouldn't have runnables inside the pipeline as I proposed)?

I meant the refactoring. I'm not sure exactly what approach makes sense yet. There are also some other changes to the runnables I'm interested in looking at: