
[Bug]: `extra_headers` regression on litellm.aembeddings (openai provider) #6836

hspak commented 1 day ago

What happened?

At some point between versions 1.41.2 and 1.52.5, litellm stopped supporting `extra_headers` on `litellm.aembedding` for the openai provider.

We depend on this for some custom auth.

Example usage:

```python
response = await litellm.aembedding(
    model=EMBEDDING_MODEL,
    input=strs,
    dimensions=EMBEDDING_NUM_DIMENSIONS,
    metadata=<custom>,
    extra_headers=<custom-headers>,
)
```

Relevant log output

No response


hspak commented 1 day ago

I was able to work around this by setting `litellm.aclient_session = httpx.AsyncClient(headers=headers)`.

hspak commented 1 day ago

Nvm, `litellm.aclient_session` seems to lose the authorization header for both completion and embedding calls.