langchain-ai / langserve

LangServe 🦜️🏓

How to do Human in Loop in /stream Endpoint #313

Open wolvever opened 9 months ago

wolvever commented 9 months ago

LangServe can stream output via the /stream endpoint. However, if I want to send an event and then wait for human feedback, e.g. wait for a parameter value or a confirmation, that seems impossible with SSE alone. Users have to do their own conversation keeping and split the logic across two functions: one returns a generator and saves the conversation to a cache, and another request handler loads it back.

Is there any demo that implements SSE and human-in-the-loop at the same time? If not, how do we implement that without websockets?

eyurtsev commented 9 months ago

We can't do it with the streaming endpoint right now, since that's just one-way communication from server to client.

Right now, the way to achieve this is to issue a new request for every human-in-the-loop action.

Options are either:

  1. Parameterize the inputs to the runnable to take the relevant context and store the session information on the client side
  2. Persist the relevant session on the backend
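A minimal sketch of option 1, assuming a simple message-dict conversation format (the exact input schema depends on your runnable). The client keeps the full conversation and re-sends it with every /stream request, so each human-in-the-loop step (answering a question, confirming a parameter) is just the next request. The HTTP/SSE transport is stubbed out here; a real client would POST each payload to the LangServe /stream endpoint and consume the event stream.

```python
import json
from dataclasses import dataclass, field

@dataclass
class Conversation:
    """Client-side session: the full history travels with every request."""
    messages: list = field(default_factory=list)

    def build_request(self, human_text: str) -> dict:
        # Append the human turn, then build the payload to POST to /stream.
        # (LangServe wraps runnable input under an "input" key.)
        self.messages.append({"type": "human", "content": human_text})
        return {"input": {"messages": self.messages}}

    def record_reply(self, ai_text: str) -> None:
        # Called once the SSE stream for the current request has finished.
        self.messages.append({"type": "ai", "content": ai_text})

convo = Conversation()

# First request: the model asks for a missing parameter mid-task.
payload = convo.build_request("Book a flight to Paris")
convo.record_reply("Which date would you like to fly?")

# The human-in-the-loop answer is simply a new request carrying all
# prior context -- no open connection needs to wait for the user.
payload2 = convo.build_request("Next Friday, and please confirm before booking")
print(json.dumps(payload2, indent=2))
```

With this pattern the server stays stateless; the trade-off is that payloads grow with conversation length, which is where option 2 (persisting the session on the backend and sending only a session id) becomes attractive.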

Also, more context would be super helpful for expanding support in langserve/langchain:

1) What should the human in the loop be able to do? Approve/edit/change tools used?
2) Should control be configurable per tool?
3) Are you referencing any existing workflows supported by langchain?