Closed: znwilkins closed this issue 2 months ago
Thanks @znwilkins, and apologies for the issue.

To clarify: the model parameter is now marked as required in the API specification and in 5.2.x versions of the SDK. However, 5.1.x versions of the SDK still work, as the API itself will remain backwards compatible for a short while.

We're actively working on supporting 5.2.x versions in langchain-cohere - the ETA is next week. (We plan to use the model parameter if set, and otherwise fetch the default from our list-models endpoint.)

If you'd like support in the meantime, downgrading to cohere-python 5.1.x is hopefully an acceptable short-term solution.
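The planned fallback described above can be sketched as follows. This is a hypothetical helper, not the actual langchain-cohere implementation: the `resolve_model` name and the `list_models` callable (standing in for Cohere's list-models endpoint) are assumptions for illustration.

```python
def resolve_model(model=None, list_models=None):
    """Return the model to use for a tokenize call.

    Hypothetical helper: honour an explicitly set model, otherwise
    fall back to the first entry returned by a list-models callable
    (a stand-in for Cohere's list-models endpoint).
    """
    if model is not None:
        return model
    available = list_models()
    if not available:
        raise ValueError("no models available to use as a default")
    return available[0]


# an explicitly set model wins
print(resolve_model(model="command-r", list_models=lambda: ["command"]))
# no model set: the default comes from the (stubbed) endpoint
print(resolve_model(list_models=lambda: ["command"]))
```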
Thanks for the explanation @harry-cohere! Sounds like you folks are on the ball. Downgrading to 5.1.x does indeed work for now. 👍
Hello @znwilkins,

This is now fixed in langchain-cohere version 0.1.3 - it supports Cohere SDK versions 5.3+. Migrating from the Cohere integration in langchain-community is hopefully straightforward; there's a migration guide here.

Thanks for your patience, and please reach out to let us know how it goes.
Hi @harry-cohere, thanks for the update! I can confirm that with Cohere SDK v5.3.2 and langchain-cohere v0.1.3, this issue has been resolved.
I do notice this log message:
/home/zachary/.cache/pypoetry/virtualenvs/MY_VENV/lib/python3.10/site-packages/cohere/client.py:217: RuntimeWarning: coroutine 'local_tokenize' was never awaited
opts["additional_headers"]["sdk-api-warning-message"] = "offline_tokenizer_failed"
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
Breaking and printing the exception at that point shows:
Traceback (most recent call last):
File "/home/zachary/.cache/pypoetry/virtualenvs/MY_VENV/lib/python3.10/site-packages/cohere/client.py", line 213, in tokenize
tokens = asyncio.run(local_tokenizers.local_tokenize(self, text=text, model=model))
File "/usr/lib/python3.10/asyncio/runners.py", line 33, in run
raise RuntimeError(
RuntimeError: asyncio.run() cannot be called from a running event loop
I am calling my chain within a FastAPI application, so it looks like a nested event loop situation. It doesn't affect my LCEL chains, though, and that's the way we're moving long-term.
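For context, this failure mode is reproducible without cohere at all: `asyncio.run()` refuses to start a loop while one is already running, which is why the SDK ends up falling back to the API inside a FastAPI handler. A minimal sketch (the `local_tokenize` coroutine and `tokenize_sync` helper here are stand-ins, not cohere's actual code):

```python
import asyncio


async def local_tokenize(text):
    # stand-in for the SDK's offline tokenizer coroutine
    return text.split()


def tokenize_sync(text):
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # no loop running (plain script): asyncio.run() is safe
        return asyncio.run(local_tokenize(text))
    # already inside a loop (e.g. a FastAPI handler): asyncio.run()
    # would raise "cannot be called from a running event loop",
    # so fall back (the real SDK falls back to the tokenize API here)
    return None


async def handler():
    # simulates calling the sync helper from within FastAPI's event loop
    return tokenize_sync("hello world")


print(tokenize_sync("hello world"))  # plain call: offline path works
print(asyncio.run(handler()))        # nested call: falls back to None
```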
In any case, this is just a warning, so I'm a happy camper! Thanks again!
@znwilkins thank you for the detailed reply, very helpful! Working on a fix for the warning now. This should be non-blocking, as it's falling back to calling the API to count the tokens.
@znwilkins This is now fixed in 5.3.4. Thank you for reporting :)
Using a LangChain ConversationalRetrievalChain, we fail on a call to tokenize because of a missing model parameter.

Background

This PR made the model: str parameter to tokenize() mandatory, and is included starting in cohere-python v5.2.0. I was using cohere-python v5.2.5, langchain-core v0.1.42, and langchain-community v0.0.32. langchain-cohere is calling the function without the argument (link), which then throws an error.

Suggested Fix
The Cohere documentation still says that this argument is optional, even though it isn't. If it is indeed mandatory, then the fix is probably to choose a reasonable default while allowing the user to override.
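That fix could be as small as a keyword default at the call site. A sketch under assumptions: the `DEFAULT_TOKENIZE_MODEL` constant and the stub client are hypothetical illustrations, not cohere's real API.

```python
DEFAULT_TOKENIZE_MODEL = "command"  # hypothetical default, not cohere's


class StubClient:
    """Minimal stand-in for cohere.Client, for illustration only."""

    def tokenize(self, *, text, model):
        return {"model": model, "tokens": text.split()}


def tokenize(client, text, model=None):
    # choose a reasonable default, but let the caller override it
    return client.tokenize(text=text, model=model or DEFAULT_TOKENIZE_MODEL)


print(tokenize(StubClient(), "hi there")["model"])                     # default
print(tokenize(StubClient(), "hi there", model="command-r")["model"])  # override
```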
Thanks, I'm happy to provide more info as needed.