Closed krrishdholakia closed 1 month ago
Hi @ducnvu, thanks for using LiteLLM. Any chance we can hop on a call to learn how we can improve LiteLLM Proxy for you ?
We’re planning roadmap and I’d love to get your feedback
My cal for your convenience: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
My linkedin if you prefer DMs: https://www.linkedin.com/in/reffajnaahsi/
The Feature
Azure AI supports `content` only as a string, while OpenAI supports `content` as a list.
Passing an OpenAI-style list straight through to Azure AI raises an error.
Motivation, pitch
A streamlined way to call vision and non-vision models would be great. Being LLM-agnostic is a big reason why I use the package, but currently I still have to handle different request formats depending on which model the request goes to.
For example: when calling GPT-4 Vision, `messages[].content` is an array. Using the same code to call Azure's Command R+ results in:
litellm.exceptions.APIError: OpenAIException - Error code: 400 - {'message': 'invalid type: parameter messages.content is of type array but should be of type string.'}
I'm aware this is on the model provider's side, but GPT's non-vision models, for example, accept both formats.
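As a workaround until the proxy normalizes this itself, a caller can flatten list-style `content` into a plain string before sending to providers that only accept strings. This is a minimal sketch under assumptions: the helper name `flatten_content` is hypothetical (not part of LiteLLM), it keeps only `"text"` parts and drops image parts, since those have no string equivalent.

```python
def flatten_content(messages):
    """Convert OpenAI-style list `content` into a plain string.

    Hypothetical helper (not a LiteLLM API): messages whose `content`
    is already a string pass through unchanged; list-style content is
    reduced to its text parts joined by newlines. Image parts are
    dropped because string-only providers cannot represent them.
    """
    normalized = []
    for msg in messages:
        content = msg.get("content")
        if isinstance(content, list):
            text = "\n".join(
                part.get("text", "")
                for part in content
                if isinstance(part, dict) and part.get("type") == "text"
            )
            msg = {**msg, "content": text}
        normalized.append(msg)
    return normalized


# Usage: the same message list can then be sent to either provider style.
vision_style = [
    {"role": "user", "content": [{"type": "text", "text": "Describe this."}]}
]
print(flatten_content(vision_style))
# [{'role': 'user', 'content': 'Describe this.'}]
```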
Twitter / LinkedIn details
cc: @ducnvu