langchain-ai / langchainjs

🦜🔗 Build context-aware reasoning applications 🦜🔗
https://js.langchain.com/docs/
MIT License

Abort when using RunnableSequence #4513

Closed SergioCarrascosaSanchez closed 1 month ago

SergioCarrascosaSanchez commented 8 months ago

I want to abort token generation with the AbortController when I call abortController.abort(), but I couldn't find a way to do it with my current implementation of a RunnableSequence.

const model = new ChatOpenAI({…});

const abortController = new AbortController();
// Note: bind() returns a new runnable rather than mutating the model,
// so its result must be captured and used below.
const boundModel = model.bind({ signal: abortController.signal });

const chatMessageHistory = new ChatMessageHistory(chatHistory);
const chatHistoryMemory = new BufferMemory({ chatHistory: chatMessageHistory, … });

const promptCustomTemplate = ChatPromptTemplate.fromMessages([…]);

const newChain = new ConversationChain({
  memory: chatHistoryMemory,
  prompt: promptCustomTemplate,
  llm: boundModel,
});

const runnableConversation = RunnableSequence.from([
  {
    input: (params: { question: string }) => params.question,
    ainame: () => name,
    guidelines: () => guidelines,
    personality: () => personality,
    context: (params: { question: string }) => { … },
  },
  (chatConversationParams: any) => newChain.invoke(chatConversationParams),
]);

await runnableConversation.invoke({
  question: input,
});
dosubot[bot] commented 8 months ago

🤖

Hey @SergioCarrascosaSanchez, good to see you back! 🚀

Based on the information provided, it seems you're trying to abort the token generation process using the AbortController. In your current implementation, you've already bound the AbortController signal to the ChatOpenAI model. However, it seems like you're trying to abort the process within a RunnableSequence, which might not be directly supported.

From the issues I found in the langchainjs repository, there are two that might be relevant to your situation:

  1. Langchain Expression Language / RunnableSequence Does Not Support AbortSignal
  2. Look at implementing "abort" functionality for LLMs (start with openai)

The first issue suggests that you can bind a signal to the LLM in a sequence like this:

const model = llm.bind({ signal: controller.signal });

And the stream function returns an IterableReadableStream, which can be used with an abort signal as follows:

const reader = await chain.stream({});

abortSignal.addEventListener(
  "abort",
  () => {
    void reader.cancel("abort");
  },
  { once: true },
);

The second issue suggests that the abort functionality has been implemented for LLMs, and you might be able to use it in your current implementation.

However, it's important to note that these solutions might not work directly in your case, as your implementation seems to be a bit different. You might need to adapt these solutions to fit your specific use case.

If you're still having trouble implementing the abort functionality, could you provide more details about the specific issues you're facing? This could help in finding a more tailored solution to your problem.


#### About Dosu This response is meant to be useful and save you time. It is **not meant to be a precise solution**, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
SergioCarrascosaSanchez commented 8 months ago

@dosu-bot So, there is no way yet to cancel the token generation of a RunnableSequence?

dosubot[bot] commented 8 months ago

🤖

Based on the information you've provided and the similar issues I found in the langchainjs repository, it seems you're on the right track with using AbortController. However, you might need to adjust your implementation slightly to abort the token generation process within a RunnableSequence.

Firstly, you can bind a signal to the LLM in a sequence like this:

const model = llm.bind({ signal: controller.signal });

This will allow the AbortController's signal to be listened to by the language model.
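To illustrate what binding the signal is supposed to provide, here is a hedged, LangChain-free sketch. fakeInvoke is a hypothetical stand-in for a bound model's invoke(): it checks the signal up front and rejects as soon as the signal fires, which is the behavior .bind({ signal }) is meant to wire up.

```typescript
// Hypothetical stand-in for a bound model's invoke(); not LangChain API.
type Options = { signal?: AbortSignal };

async function fakeInvoke(prompt: string, options: Options = {}): Promise<string> {
  const { signal } = options;
  return new Promise<string>((resolve, reject) => {
    // Reject immediately if the signal was already aborted.
    if (signal?.aborted) {
      reject(new Error("AbortError"));
      return;
    }
    // Simulate slow token generation.
    const timer = setTimeout(() => resolve(`echo: ${prompt}`), 50);
    // Reject (and stop the work) the moment abort() is called.
    signal?.addEventListener(
      "abort",
      () => {
        clearTimeout(timer);
        reject(new Error("AbortError"));
      },
      { once: true },
    );
  });
}

const controller = new AbortController();
const pending = fakeInvoke("hello", { signal: controller.signal });
controller.abort();

let aborted = false;
try {
  await pending;
} catch (e) {
  aborted = (e as Error).message === "AbortError";
}
```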

Secondly, the stream function returns an IterableReadableStream, which can be used with an abort signal as follows:

const reader = await chain.stream({});

abortSignal.addEventListener(
  "abort",
  () => {
    void reader.cancel("abort");
  },
  { once: true },
);

In this code, an event listener is added to the abort signal. When the abort signal is triggered (i.e., when abortController.abort() is called), the reader.cancel("abort") method is called, effectively cancelling the token generation process.

Please try this solution and let me know if it works for you. If you encounter any issues, please provide more details so I can assist you further.


andrewnguonly commented 7 months ago

@SergioCarrascosaSanchez, this might be a regression with ChatOpenAI, but I'm not 100% sure. I'm also not sure if the AbortController is intended to be used for non-streaming invocations.

  1. The ChatOpenAI non-streaming (i.e. invoke()) invocation logic does not implement abort signal handling.
     a. The signal is passed to the method completionWithRetry() here.
     b. But the implementation of completionWithRetry() doesn't do anything with it (see here).
  2. In contrast, the ChatAnthropic class does implement abort signal handling for non-streaming invocation logic.
     a. The signal is passed to the method completionWithRetry() here.
     b. Inside the implementation of completionWithRetry(), the signal is passed to the class's AsyncCaller, where it's handled in the callWithOptions() method (see here).
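If the signal is indeed ignored inside completionWithRetry(), one client-side workaround is to race the invocation against the signal. Important caveat: this only stops waiting for the result; the underlying request keeps running server-side. The sketch below is a generic, LangChain-free illustration of the pattern (raceWithSignal is a hypothetical helper, not a library API).

```typescript
// Hypothetical helper: resolve/reject with the promise, or reject
// early when the AbortSignal fires. Does NOT cancel the underlying work.
function raceWithSignal<T>(promise: Promise<T>, signal: AbortSignal): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const onAbort = () => reject(new Error("Aborted"));
    if (signal.aborted) {
      onAbort();
      return;
    }
    signal.addEventListener("abort", onAbort, { once: true });
    promise.then(
      (v) => { signal.removeEventListener("abort", onAbort); resolve(v); },
      (e) => { signal.removeEventListener("abort", onAbort); reject(e); },
    );
  });
}

// Example: a slow operation (stand-in for chain.invoke) aborted mid-flight.
const controller = new AbortController();
const slow = new Promise<string>((r) => setTimeout(() => r("done"), 1_000));

setTimeout(() => controller.abort(), 10);

let result = "pending";
try {
  result = await raceWithSignal(slow, controller.signal);
} catch {
  result = "aborted";
}
```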
DanielZhui commented 5 months ago

@SergioCarrascosaSanchez Hello, were you able to solve this? I encountered the same problem.

LeoAnt02 commented 4 months ago

+1, would love a solution for this. Cancelling a simple invoke with a signal currently works (https://js.langchain.com/v0.1/docs/modules/model_io/chat/cancelling_requests/),

but it doesn't work with a ConversationChain and RunnableSequence.
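Until non-streaming invocations honor the signal, the workaround suggested earlier in this thread is to consume the sequence via stream() instead of invoke() and stop iterating when the signal fires. A minimal, LangChain-free sketch of that consumption loop (fakeTokenStream is a hypothetical stand-in for runnable.stream()):

```typescript
// Hypothetical stand-in for the async-iterable returned by runnable.stream().
async function* fakeTokenStream(): AsyncGenerator<string> {
  for (let i = 0; ; i++) {
    await new Promise((r) => setTimeout(r, 5));
    yield `tok${i}`;
  }
}

const ctrl = new AbortController();
const received: string[] = [];

for await (const token of fakeTokenStream()) {
  received.push(token);
  // Simulate the user pressing "stop" after a few tokens.
  if (received.length === 3) ctrl.abort();
  // Breaking out of for-await closes the iterator, ending the stream.
  if (ctrl.signal.aborted) break;
}
```

In a UI, abort() would be called from a stop button's handler rather than inside the loop; the break then takes effect on the next yielded token.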