Open · luizzappa opened this issue 3 months ago
Thank you for providing a detailed description of the issue you're facing with the `AbortController` in LangChain JS. I understand that the abort signal is not being sent as part of the request options, causing unnecessary token consumption on the OpenAI side.
Based on the context provided, it appears that the `AgentExecutor` class in the LangChain JS codebase does not currently support the use of an `AbortController` signal in its `invoke` method. The code does not reference `AbortController` or its `signal` property, which are typically used to abort a fetch request or similar asynchronous operations in JavaScript. However, the `AgentExecutor` class does have a mechanism for early stopping, implemented through the `earlyStoppingMethod` property and the `_stop` method of the `AgentExecutorIterator` class. This mechanism allows the `AgentExecutor` to handle situations where execution needs to be stopped early, but it is distinct from signal-based cancellation of an in-flight request.
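To make the distinction concrete, here is a minimal, hypothetical sketch (none of these names come from LangChain) contrasting a stop flag checked between iterations, in the spirit of `earlyStoppingMethod`/`_stop`, with signal-based abort. A stop flag can only take effect between steps; a step that is already running completes anyway:

```typescript
// Hypothetical sketch: early stopping via a flag checked between steps.
// An in-flight step is NOT interrupted; the loop only stops before the
// next step begins. Contrast with an AbortSignal, which can cancel the
// underlying request itself.
async function runSteps(
  steps: Array<() => Promise<string>>,
  shouldStop: () => boolean,
): Promise<string[]> {
  const results: string[] = [];
  for (const step of steps) {
    if (shouldStop()) break; // early stopping: checked between steps only
    results.push(await step()); // a running step always finishes
  }
  return results;
}

let stopped = false;
const steps = [
  async () => "a",
  async () => {
    stopped = true; // request a stop mid-run
    return "b"; // ...but this step still completes
  },
  async () => "c",
];
runSteps(steps, () => stopped).then((r) => console.log(r)); // r is ["a", "b"]
```

Note that step "b" still runs to completion even though it set the stop flag itself; only step "c" is skipped. This mirrors why early stopping alone cannot cancel an OpenAI request that is already in flight.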
The `invoke` method of the `AgentExecutor` class does accept an `options` parameter that could potentially include an `AbortSignal`. This is evident from the signature of the `_streamIterator` method, which is indirectly called by `invoke`. The `_streamIterator` method accepts an `options` parameter of type `Partial<RunnableConfig>`, and in TypeScript and JavaScript an `AbortSignal` could be part of such a configuration object to control the cancellation of asynchronous operations.
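As a minimal sketch of that pattern (the `CallOptions` interface and `longRunningCall` function below are illustrative, not LangChain APIs), an options object can carry an `AbortSignal` that the callee listens to:

```typescript
// Illustrative sketch: threading an AbortSignal through an options object,
// the way a Partial<RunnableConfig> could carry one. Names are hypothetical.
interface CallOptions {
  signal?: AbortSignal;
}

function longRunningCall(options: CallOptions = {}): Promise<string> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => resolve("done"), 5_000);
    options.signal?.addEventListener("abort", () => {
      clearTimeout(timer); // cancel the pending work
      reject(new Error("Aborted"));
    });
  });
}

const controller = new AbortController();
const pending = longRunningCall({ signal: controller.signal });
controller.abort();
pending.catch((err) => console.log(err.message)); // "Aborted"
```

The key point is that cancellation only works if every layer forwards the signal downward; if any intermediate layer drops it from the options it passes on, the abort never reaches the underlying request.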
However, the provided code does not explicitly mention `AbortSignal` within the `RunnableConfig` type, nor use it directly in the `invoke` method. The inclusion of an `AbortSignal` would depend on the definition of `RunnableConfig` and on how the LangChain framework or consuming code handles abort signals in asynchronous operations.
In terms of similar issues, there are two that might be relevant:
1. "Langchain Expression Language / RunnableSequence Does Not Support AbortSignal" - This issue suggests binding a signal to the LLM in a sequence and using an `IterableReadableStream` with an abort signal. However, this solution might not be directly applicable to your case, as it involves the `RunnableSequence` class, not the `AgentExecutor` class.
2. "Calling abortController.abort() does not stop the token stream" - This issue suggests updating the LangChain JS framework to version v0.0.84 and reviewing the updated documentation on how to cancel requests. However, you've mentioned that updating to the latest stable version of LangChain JS did not resolve the issue.
Given the current state of the LangChain JS codebase and the related issues, it seems that the `AbortController` signal might not be fully supported in the `AgentExecutor` class. This could be a limitation of the current implementation or a potential area for enhancement in future versions of the LangChain JS framework.
I hope this information helps clarify the situation. If you have further questions or need additional assistance, please don't hesitate to ask.
Checked other resources
Example Code
Error Message and Stack Trace (if applicable)
No response
Description
When aborting an `AbortController`, I would expect its signal to be sent as part of the request to OpenAI. But nothing is sent: I inspected what is being sent to OpenAI, and the `options` parameter above does not contain the `signal` property: https://github.com/langchain-ai/langchainjs/blob/d6e25af137873493d30bdf5732d46b842e421ffa/libs/langchain-openai/src/llms.ts#L480-L483
The abort signal only stops the execution of the LangChain agent; on the OpenAI side, it continues to generate a response that is no longer needed and consumes tokens.
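The behavior described above can be simulated in isolation. In this hypothetical sketch (all names are illustrative; `serverGenerate` stands in for the OpenAI completion call), the client drops the signal instead of forwarding it, so aborting only stops the client-side wait while the simulated server keeps working:

```typescript
// Sketch of the reported behavior: the abort stops the client-side wait,
// but because the signal is never forwarded, the simulated "server" still
// finishes its work. All names are illustrative.
let serverFinished = false;

function serverGenerate(): Promise<string> {
  // Simulated server-side generation; it never sees the signal.
  return new Promise((resolve) =>
    setTimeout(() => {
      serverFinished = true;
      resolve("tokens");
    }, 50),
  );
}

function clientCall(signal: AbortSignal): Promise<string> {
  const work = serverGenerate(); // BUG being illustrated: signal not passed on
  return new Promise((resolve, reject) => {
    signal.addEventListener("abort", () => reject(new Error("Aborted")));
    work.then(resolve);
  });
}

const controller = new AbortController();
clientCall(controller.signal).catch((e) => console.log(e.message)); // "Aborted"
controller.abort();
setTimeout(() => console.log(serverFinished), 100); // true: server still ran
```

The fix being requested is the equivalent of passing `signal` into `serverGenerate` so the underlying request is actually cancelled, rather than merely abandoning the promise that awaits it.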
System Info
langchain@0.1.31