The Feature
Azure AI expects messages.content as a string, while OpenAI accepts it as a list. Passing an OpenAI-style content list straight through to Azure AI raises an error.
Motivation, pitch
A streamlined way to call vision and non-vision models would be great. Being LLM-agnostic is a big reason why I use the package, but currently I still have to handle different request formats depending on which model the request goes to.
For example: when calling GPT-4 Vision, messages.content is an array. Using the same code to call Azure's Command R+ results in:
litellm.exceptions.APIError: OpenAIException - Error code: 400 - {'message': 'invalid type: parameter messages.content is of type array but should be of type string.'}
I'm aware this is on the model provider's side, but GPT's non-vision models, for example, accept both formats.
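As a rough illustration of what such normalization could look like (this is a hypothetical helper sketch, not existing litellm code), a pre-processing step could collapse OpenAI-style content lists into plain strings before dispatching to providers that only accept string content:

```python
def flatten_content(messages):
    """Collapse OpenAI-style content lists into plain strings.

    Hypothetical sketch: providers that only accept string content
    (e.g. Azure AI's Command R+ endpoint) reject list-typed
    messages[*].content, so the text parts are joined before sending.
    Non-text parts (e.g. image_url) are dropped here, since a
    text-only model cannot use them anyway.
    """
    flattened = []
    for msg in messages:
        content = msg.get("content")
        if isinstance(content, list):
            text_parts = [
                part.get("text", "")
                for part in content
                if part.get("type") == "text"
            ]
            msg = {**msg, "content": "\n".join(text_parts)}
        flattened.append(msg)
    return flattened


# Vision-style request that Azure AI would otherwise reject:
vision_messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
        ],
    }
]
print(flatten_content(vision_messages))
```

If litellm applied something like this automatically per-provider, callers could keep a single request format regardless of the target model.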
Twitter / LinkedIn details
cc: @ducnvu