Closed · SergioCarrascosaSanchez closed this 1 month ago
🤖
Hey @SergioCarrascosaSanchez, good to see you back! 🚀
Based on the information provided, it seems you're trying to abort the token generation process using the `AbortController`. In your current implementation, you've already bound the `AbortController` signal to the `ChatOpenAI` model. However, it seems like you're trying to abort the process within a `RunnableSequence`, which might not be directly supported.
From the issues I found in the langchainjs repository, there are two that might be relevant to your situation:
The first issue suggests that you can bind a signal to the LLM in a sequence like this:
```ts
const model = llm.bind({ signal: controller.signal });
```
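For concreteness, here is a minimal sketch of that pattern in isolation. The model configuration, prompt, timing, and import path are illustrative assumptions (a recent `@langchain/openai`), not taken from this thread:

```ts
import { ChatOpenAI } from "@langchain/openai";

async function main() {
  const controller = new AbortController();
  const llm = new ChatOpenAI({ temperature: 0 });

  // Bind the abort signal so it is forwarded with every call.
  const model = llm.bind({ signal: controller.signal });

  // Abort shortly after the request starts (timing is illustrative).
  setTimeout(() => controller.abort(), 500);

  try {
    const res = await model.invoke("Write a very long story.");
    console.log(res.content);
  } catch (e) {
    console.log("Aborted:", e); // typically an AbortError
  }
}

main().catch(console.error);
```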
The `stream` function returns an `IterableReadableStream`, which can be used with an abort signal as follows:
```ts
const reader = await chain.stream({});
abortSignal.addEventListener(
  "abort",
  () => {
    void reader.cancel("abort");
  },
  { once: true },
);
```
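Wiring that listener into a complete, consumable example might look like the sketch below. The prompt, chain composition, and timing are assumptions for illustration; depending on the version, cancellation may surface as an early end of the stream or as a thrown error:

```ts
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";

async function main() {
  const abortController = new AbortController();

  const prompt = PromptTemplate.fromTemplate("Tell me a story about {topic}.");
  const chain = prompt.pipe(new ChatOpenAI({ temperature: 0 }));

  const reader = await chain.stream({ topic: "space travel" });

  // Cancel the underlying stream as soon as the signal fires.
  abortController.signal.addEventListener(
    "abort",
    () => {
      void reader.cancel("abort");
    },
    { once: true },
  );

  // Trigger the abort mid-generation (timing is illustrative).
  setTimeout(() => abortController.abort(), 1000);

  try {
    for await (const chunk of reader) {
      process.stdout.write(chunk.content as string);
    }
  } catch (e) {
    console.log("\nStream cancelled:", e);
  }
}

main().catch(console.error);
```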
The second issue suggests that the abort functionality has been implemented for LLMs, and you might be able to use it in your current implementation.
However, it's important to note that these solutions might not work directly in your case, as your implementation seems to be a bit different. You might need to adapt these solutions to fit your specific use case.
If you're still having trouble implementing the abort functionality, could you provide more details about the specific issues you're facing? This could help in finding a more tailored solution to your problem.
@dosu-bot So, there is no way yet to cancel the token generation of a RunnableSequence?
🤖
Based on the information you've provided and the similar issues I found in the langchainjs repository, it seems you're on the right track with using `AbortController`. However, you might need to adjust your implementation slightly to abort the token generation process within a `RunnableSequence`.

Firstly, you can bind a signal to the LLM in a sequence like this:

```ts
const model = llm.bind({ signal: controller.signal });
```

This will allow the `AbortController`'s signal to be listened to by the language model.
Secondly, the `stream` function returns an `IterableReadableStream`, which can be used with an abort signal as follows:

```ts
const reader = await chain.stream({});
abortSignal.addEventListener(
  "abort",
  () => {
    void reader.cancel("abort");
  },
  { once: true },
);
```
In this code, an event listener is added to the abort signal. When the abort signal is triggered (i.e., when `abortController.abort()` is called), the `reader.cancel("abort")` method is called, effectively cancelling the token generation process.
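Since the original question concerns a `RunnableSequence` specifically, a sketch combining both ideas might look like the following. The sequence composition, prompt, and timing below are illustrative assumptions, not the asker's actual code:

```ts
import { ChatOpenAI } from "@langchain/openai";
import { PromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnableSequence } from "@langchain/core/runnables";

async function main() {
  const controller = new AbortController();

  // Bind the signal to the model inside the sequence...
  const model = new ChatOpenAI({ temperature: 0 }).bind({
    signal: controller.signal,
  });

  const chain = RunnableSequence.from([
    PromptTemplate.fromTemplate("Summarize: {text}"),
    model,
    new StringOutputParser(),
  ]);

  const reader = await chain.stream({ text: "A very long document" });

  // ...and also cancel the output stream when the signal fires.
  controller.signal.addEventListener(
    "abort",
    () => {
      void reader.cancel("abort");
    },
    { once: true },
  );

  setTimeout(() => controller.abort(), 1000);

  try {
    for await (const chunk of reader) {
      process.stdout.write(chunk);
    }
  } catch (e) {
    console.log("\nCancelled:", e);
  }
}

main().catch(console.error);
```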
This solution was suggested in the issues mentioned above, along with an example demonstrating the implementation.
Please try this solution and let me know if it works for you. If you encounter any issues, please provide more details so I can assist you further.
@SergioCarrascosaSanchez, this might be a regression with `ChatOpenAI`, but I'm not 100% sure. I'm also not sure if the `AbortController` is intended to be used for non-streaming invocations.

1. The `ChatOpenAI` non-streaming (i.e. `invoke()`) invocation logic does not implement abort signal handling.
   a. The signal is passed to the `completionWithRetry()` method here.
   b. But the implementation of `completionWithRetry()` doesn't do anything with it (see here).
2. The `ChatAnthropic` class does implement abort signal handling for its non-streaming invocation logic.
   a. The signal is passed to the `completionWithRetry()` method here.
   b. Inside the implementation of `completionWithRetry()`, the signal is passed to the class's `AsyncCaller`, where it's handled in the `callWithOptions()` method (see here).
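Given that analysis (the signal reaching `ChatOpenAI`'s non-streaming path but being ignored), one possible user-land workaround is to stop waiting on the caller's side by racing the invocation against the signal. This is only a sketch under that assumption; `raceWithAbort` is a hypothetical helper, and the underlying HTTP request may keep running server-side:

```ts
// Hypothetical helper: reject as soon as the signal fires, even though
// the underlying request may continue running on the server.
function raceWithAbort<T>(promise: Promise<T>, signal: AbortSignal): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const onAbort = () => reject(new Error("Aborted"));
    if (signal.aborted) {
      onAbort();
      return;
    }
    signal.addEventListener("abort", onAbort, { once: true });
    promise.then(
      (value) => {
        signal.removeEventListener("abort", onAbort);
        resolve(value);
      },
      (err) => {
        signal.removeEventListener("abort", onAbort);
        reject(err);
      },
    );
  });
}

// Usage (chain and input are illustrative):
// const result = await raceWithAbort(chain.invoke({ text }), controller.signal);
```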
@SergioCarrascosaSanchez Hello, can you solve it? I encountered the same problem.
+1 Would love a solution for this

Currently a simple `invoke` can be cancelled with a signal (https://js.langchain.com/v0.1/docs/modules/model_io/chat/cancelling_requests/), but this doesn't work with a `ConversationChain` or a `RunnableSequence`.
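For reference, the single-call cancellation pattern from those docs looks roughly like this (the prompt, timing, and import path are assumptions and may differ by version):

```ts
import { ChatOpenAI } from "@langchain/openai";

async function main() {
  const model = new ChatOpenAI({ temperature: 0 });
  const controller = new AbortController();

  // Abort shortly after the call starts.
  setTimeout(() => controller.abort(), 100);

  try {
    // Passing the signal as a call option cancels the in-flight request.
    const res = await model.invoke("Give me a long poem.", {
      signal: controller.signal,
    });
    console.log(res.content);
  } catch (e) {
    console.log(e); // typically an AbortError
  }
}

main().catch(console.error);
```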
I want to abort the token generation with the `AbortController` when I call `abortController.abort()`. I couldn't find a way to do it with my current implementation of a `RunnableSequence`.