Checked other resources
[X] I added a very descriptive title to this issue.
[X] I searched the LangChain documentation with the integrated search.
[X] I used the GitHub search to find a similar question and didn't find it.
[X] I am sure that this is a bug in LangChain rather than my code.
[X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
Example Code
from langchain_mistralai.chat_models import ChatMistralAI

chain = ChatMistralAI(streaming=True)
# Attach a callback handler that implements on_llm_new_token, then call:
await chain.ainvoke(...)

# Before: on_llm_new_token is invoked on the callback,
# but it receives the tokens in grouped format (the full response at once).
# With my pull request: on_llm_new_token is invoked on the callback
# with the individual streaming tokens instead of the grouped output.
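For anyone who wants to reproduce this, here is a minimal, self-contained sketch; the PrintTokenHandler class, the prompt string, and the asyncio wrapper are illustrative additions of mine (a MISTRAL_API_KEY is assumed to be set in the environment):

import asyncio

from langchain_core.callbacks import AsyncCallbackHandler
from langchain_mistralai.chat_models import ChatMistralAI

class PrintTokenHandler(AsyncCallbackHandler):
    """Print each token passed to on_llm_new_token as it arrives."""
    async def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(token, end="", flush=True)

async def main() -> None:
    chain = ChatMistralAI(streaming=True, callbacks=[PrintTokenHandler()])
    # With streaming honoured in the ainvoke path, tokens are printed one by one;
    # without the fix, the callback only ever sees the grouped output.
    await chain.ainvoke("Tell me a short joke")

asyncio.run(main())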
Error Message and Stack Trace (if applicable)
No message.
Description
Hello,
I identified an issue in the langchain_mistralai package where the streaming callback (see on_llm_new_token) was not functioning correctly when the streaming parameter was set to True and the model was called with ainvoke.
The root cause is that the streaming parameter was not taken into account in the async invocation path (I think it is an oversight).
To resolve the issue, I added the streaming attribute.
Now the streaming callback works as expected when the streaming parameter is set to True.
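For context, the way other chat model integrations usually honour the streaming flag in the async path looks roughly like the sketch below. This is my reading of the generic pattern, not the actual contents of the pull request, and the method signature is simplified:

from langchain_core.language_models.chat_models import agenerate_from_stream

# Sketch of what a ChatMistralAI._agenerate body could look like (simplified).
async def _agenerate(self, messages, stop=None, run_manager=None, **kwargs):
    if self.streaming:
        # Delegate to _astream, which reports each chunk to
        # run_manager.on_llm_new_token, so callbacks see individual tokens.
        stream_iter = self._astream(
            messages, stop=stop, run_manager=run_manager, **kwargs
        )
        return await agenerate_from_stream(stream_iter)
    # Otherwise keep the existing single-request (non-streaming) code path.
    ...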
I am opening this issue because the pull request I submitted a month ago has not received any attention, and the problem reappears in each new version.
Could you please review the pull request.
System Info
All systems can reproduce the issue; it is not platform-specific.