ppabis opened this issue 2 weeks ago
I tested it in other code and it happens only with anthropic.claude-3-haiku-20240307-v1:0; no other model ID caused this issue, including anthropic.claude-3-5-haiku-20241022-v1:0. I don't know if it is some pattern matching or an unfortunate hash somewhere deep inside Python.
Ok, I figured out something: it happens only with anthropic.claude-3-haiku-20240307-v1:0, and it's the only model known to the ell package that points to bedrock. However, that mapping forces a None value for the client and disregards the default_client set via ell.init, so it falls back to the OpenAI client.
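To make the path concrete, here is a minimal sketch of what I mean (the boto3 bedrock-runtime client and the region are just my setup, not anything ell prescribes):

```python
import boto3
import ell

# Example setup: a Bedrock runtime client built with boto3 (region is only an example).
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# default_client is supposed to be used when no model-specific client is resolved.
ell.init(default_client=bedrock)

# This model ID is known to ell and mapped to bedrock, but its client resolves
# to None, so ell ignores default_client and tries to build an OpenAI client.
@ell.simple(model="anthropic.claude-3-haiku-20240307-v1:0")
def greet(name: str):
    return f"Say hello to {name}."
```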
@ppabis This isn’t really a solution, but if it helps unstick you until we can fix whatever’s happening, you can try supplying the client argument to the decorator directly.
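Something along these lines (untested sketch; I’m assuming a boto3 bedrock-runtime client on your side):

```python
import boto3
import ell

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Passing the client on the decorator itself should sidestep whatever the
# model lookup is doing for this particular model ID.
@ell.simple(model="anthropic.claude-3-haiku-20240307-v1:0", client=bedrock)
def greet(name: str):
    return f"Say hello to {name}."
```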
I realized you mentioned you tried this but I would expect it to work if it’s on every decorator.
Are you doing any multithreading or multiprocessing or anything?
This comment has the latest update: https://github.com/MadcowD/ell/issues/366#issuecomment-2466455721
Anything below is likely not an issue.
So, when setting a custom client, both in ell.init and in the decorator, the change is not always reflected in ell internals. The problem comes back at a "random" point ("random" in quotes, because it always happens with anthropic.claude-3-haiku-20240307-v1:0 and never with any other model ID), probably when I set the bedrock client and then use another model that uses this client.

The error I get (when explicitly setting bedrock as the client in both the decorator and init):

However, when I add a print statement in src/ell/lmp/complex.py at https://github.com/MadcowD/ell/blob/3ce6a21d4bdf64802e4a09368d86d9abc204a9c4/src/ell/lmp/complex.py#L53, everything works and I see the object printed.
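For reference, the setup that produces the error above is roughly this (sketch from memory; constructing the bedrock client with boto3 is my own setup, not something ell requires):

```python
import boto3
import ell

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Client set explicitly in both places, as described above; the error still
# appears, but only for this one model ID.
ell.init(default_client=bedrock)

@ell.simple(model="anthropic.claude-3-haiku-20240307-v1:0", client=bedrock)
def greet(name: str):
    return f"Say hello to {name}."
```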