mlamina opened this issue 6 months ago
The `get_openai_callback` function is defined in the `langchain_community.callbacks.manager` module. This function returns a generator yielding an `OpenAICallbackHandler`, and is intended to be used as a context manager to track the number of tokens and the costs incurred when using OpenAI models.
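As a rough illustration of that pattern (a simplified sketch, not LangChain's actual implementation — `TokenTrackingHandler` and `track_tokens` are made-up stand-ins for `OpenAICallbackHandler` and `get_openai_callback`):

```python
# Sketch of a generator-based context manager that yields a handler
# object which accumulates token counts and cost while the `with`
# block runs. In LangChain the handler is fed by the model's callback
# machinery; here we invoke it manually to show the flow.
from contextlib import contextmanager


class TokenTrackingHandler:
    """Stand-in for OpenAICallbackHandler: accumulates usage stats."""

    def __init__(self):
        self.total_tokens = 0
        self.total_cost = 0.0

    def on_llm_end(self, token_usage: dict, cost: float) -> None:
        self.total_tokens += token_usage.get("total_tokens", 0)
        self.total_cost += cost


@contextmanager
def track_tokens():
    handler = TokenTrackingHandler()
    yield handler  # the caller reads the totals inside the `with` block


with track_tokens() as cb:
    # Simulate one model call reporting its usage to the handler.
    cb.on_llm_end({"total_tokens": 42}, cost=0.0009)

print(cb.total_tokens)  # 42
```

If the model call never routes through the handler's hooks, the totals read inside the `with` block stay at zero — which is exactly the symptom reported in this issue.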
From the issue description, the problem arises when using `get_openai_callback` in conjunction with `ChatOpenAI` from `langchain_openai`. The user mentions that `ChatOpenAI` does not call `ainvoke` as expected, which might be the root cause: the callback handler is never triggered, so zero tokens and zero costs are reported.
To address this issue, it would be necessary to investigate how `ChatOpenAI` from `langchain_openai` is implemented, focusing on its interaction with callback handlers and the `ainvoke` method. Ensuring that `ChatOpenAI` properly triggers the callback handlers during its execution would be crucial to resolving the reported problem.
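The suspected failure mode can be shown with a toy sketch (all names here are hypothetical stand-ins, not LangChain classes): if the async code path never notifies the registered handler, the tracker reports zero even though a call happened.

```python
import asyncio


class UsageTracker:
    """Toy stand-in for a token-counting callback handler."""

    def __init__(self):
        self.total_tokens = 0

    def on_llm_end(self, tokens: int) -> None:
        self.total_tokens += tokens


def sync_call(handler: UsageTracker) -> None:
    # The sync path notifies the handler, so tokens are counted.
    handler.on_llm_end(10)


async def buggy_async_call(handler: UsageTracker) -> None:
    # A buggy async path that skips the callback entirely,
    # mirroring the reported behavior: the handler never fires.
    await asyncio.sleep(0)


sync_handler = UsageTracker()
sync_call(sync_handler)
print(sync_handler.total_tokens)   # counted: 10

async_handler = UsageTracker()
asyncio.run(buggy_async_call(async_handler))
print(async_handler.total_tokens)  # stays 0, as in the issue
```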
I've investigated the classes and functions mentioned in the issue:
- `AsyncCallbackHandler` is defined in `/libs/core/langchain_core/callbacks/base.py`. This class is a base for creating asynchronous callback handlers that can be used to perform actions at different stages of processing within LangChain.
- `ChatOpenAI` is defined in two places:
  - `/libs/partners/openai/langchain_openai/chat_models/base.py`: this version of `ChatOpenAI` is part of the `langchain_openai` module, which is likely designed for integration with OpenAI's API.
  - `/libs/community/langchain_community/chat_models/openai.py`: another version of `ChatOpenAI` exists within the `langchain_community` module. It's possible that this version is tailored for community contributions or specific use cases.
- `get_openai_callback` is defined in `/libs/community/langchain_community/callbacks/manager.py`. This function returns a generator yielding an `OpenAICallbackHandler`, intended for use as a context manager to track the number of tokens and costs incurred when using OpenAI models.
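To illustrate the async-handler pattern named above (a minimal sketch with hypothetical names — `SimpleAsyncHandler` and `fake_llm_call` are not LangChain APIs), an async callback handler exposes coroutine hooks that the framework awaits around each stage of a call:

```python
import asyncio


class SimpleAsyncHandler:
    """Minimal sketch of an async callback handler: coroutine hooks
    that a framework would await at each processing stage."""

    def __init__(self):
        self.events = []

    async def on_llm_start(self, prompt: str) -> None:
        self.events.append(("start", prompt))

    async def on_llm_end(self, output: str) -> None:
        self.events.append(("end", output))


async def fake_llm_call(prompt: str, handler: SimpleAsyncHandler) -> str:
    # A framework would await the handler's hooks around the model call.
    await handler.on_llm_start(prompt)
    output = prompt.upper()  # stand-in for a model response
    await handler.on_llm_end(output)
    return output


handler = SimpleAsyncHandler()
result = asyncio.run(fake_llm_call("hello", handler))
print(handler.events)
```

The point of the sketch: if an async implementation omits those `await handler.on_*` calls, the handler records nothing, which is consistent with the zero counts the user observes.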
Given the issue description, the problem likely stems from how `ChatOpenAI` interacts with callback handlers, particularly in the context of asynchronous invocation (`ainvoke`). To further investigate and address the issue, it would be necessary to examine the implementation details of `ChatOpenAI` in both mentioned locations, as well as how `get_openai_callback` is designed to interact with these components.
Unfortunately, due to the extensive content of the files, a detailed line-by-line analysis was not feasible within this response. However, this overview should provide a starting point for further investigation into the issue.
I attempted to scrape the content from the URL you provided, but it seems the page content is not accessible in a way that allows me to retrieve the specific issue discussion you're referring to. This could be due to restrictions on scraping GitHub content or the specific structure of GitHub pages.
If you have specific questions or topics discussed in that issue that you're interested in, I recommend summarizing them or providing more context so I can assist you better with the information or solutions you're seeking.
Original Issue: https://github.com/langchain-ai/langchain/issues/16798
Checked other resources
Example Code
Error Message and Stack Trace (if applicable)
No response
Description
I'm trying to use `get_openai_callback` from `langchain_community.callbacks` to get the number of tokens and the costs incurred in using the agent, but I am getting zero for everything, as you can see here when I print.
(screenshot: printed callback output showing zero tokens and zero cost)
I have also set up a custom callback handler to dig deeper into the issue, and what I found is that `ChatOpenAI` from `langchain_openai` does not call `ainvoke` the way `ChatOpenAI` from `langchain.chat_models` did.
Thank you for your help
System Info
python 3.11.5