BerriAI / litellm

Python SDK, Proxy Server (LLM Gateway) to call 100+ LLM APIs in OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

[Bug]: Amazon Bedrock Llama3 throws exception #4438

Closed · grav closed this issue 3 months ago

grav commented 3 months ago

What happened?

Using this config snippet:

  - model_name: bedrock/llama3-70b-8192
    litellm_params:
      model: "bedrock/meta.llama3-70b-instruct-v1:0"
      aws_region_name: "eu-west-2"

I get the following exception (and an HTTP 500 in the proxy) when trying to use /completion:

  File "/home/grav/beyond/code/src/python/.venv/lib/python3.12/site-packages/litellm/main.py", line 2821, in aembedding
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "/home/grav/beyond/code/src/python/.venv/lib/python3.12/site-packages/litellm/utils.py", line 9996, in exception_type
    raise e
  File "/home/grav/beyond/code/src/python/.venv/lib/python3.12/site-packages/litellm/utils.py", line 8997, in exception_type
    raise ServiceUnavailableError(
litellm.exceptions.ServiceUnavailableError: BedrockException - cannot access local variable 'data' where it is not associated with a value LiteLLM Retried: 1 times, LiteLLM Max Retries: 2

I've had luck using the embedding models, so I suspect there's a bug in the way the response data is transformed?
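
For context, here is a minimal sketch of how a model entry like the one above is normally exercised through the proxy's OpenAI-compatible chat completions endpoint. The proxy URL and API key are placeholders (assuming a local proxy on the default port 4000), not values taken from this report:

  from openai import OpenAI

  # Placeholder proxy address and key; adjust to your own deployment.
  client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

  # Chat completion against the model_name defined in the config above.
  response = client.chat.completions.create(
      model="bedrock/llama3-70b-8192",
      messages=[{"role": "user", "content": "Hello"}],
  )
  print(response.choices[0].message.content)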

grav commented 3 months ago

Ah, sorry, my test was flawed: I was accidentally invoking the /embeddings endpoint 🤦
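
In case anyone hits the same trace: a minimal sketch of the mixup, reusing the placeholder proxy URL and key from the sketch above:

  from openai import OpenAI

  client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")  # placeholders

  # The accidental call: an embeddings request against the chat model.
  # This is what surfaces as the aembedding frame in the traceback above
  # and comes back from the proxy as the HTTP 500.
  client.embeddings.create(model="bedrock/llama3-70b-8192", input="Hello")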

A 4xx would probably have been more helpful than a 500, but it was nonetheless an error on my side :)

Closing.