I have LocalAI as the backend serving a VL model. LocalAI supports an OpenAI gpt-4-vision compatible completions API. I tested the API locally with the request below:
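For reference, the request follows the OpenAI gpt-4-vision message format, where the user message `content` is a list of parts mixing text and an `image_url` entry. A minimal sketch of such a request body (the model name, image URL, and endpoint here are placeholders, not the actual values from my test):

```python
import json

# Illustrative gpt-4-vision style request body (placeholder values).
payload = {
    "model": "llava",  # placeholder model name
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "describe the image"},
                {
                    "type": "image_url",
                    # placeholder URL, not the image from my actual test
                    "image_url": {"url": "https://example.com/cat.png"},
                },
            ],
        }
    ],
}

# This would be POSTed to LocalAI's OpenAI-compatible endpoint,
# e.g. http://localhost:8080/v1/chat/completions
print(json.dumps(payload, indent=2))
```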
The backend log shows that it compiles the image URL into `<img>[local file path]</image>` based on the chat template setting. This is the normal behavior, and the API returns the correct answer.
But when I set up LiteLLM to proxy the vision request to this LocalAI API, the image URL disappears. The LocalAI log is below; the `<image>` tag is not there.
9:16AM DBG Prompt (before templating): user:describe the image
9:16AM DBG Template found, input modified to: Instruct: user:describe the image
What happened?
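My guess is that somewhere along the proxy path the structured multimodal content list is flattened to a plain string, keeping only the text parts. This is a hypothetical sketch of that failure mode (not LiteLLM's actual code; `flatten_content` is a name I made up), and it reproduces exactly the prompt seen in the log:

```python
def flatten_content(content):
    """Naively reduce an OpenAI-style multimodal content list to a string
    by keeping only the text parts; image_url parts are silently dropped.
    (Illustrative sketch only, not LiteLLM's actual implementation.)"""
    if isinstance(content, str):
        return content
    return " ".join(p["text"] for p in content if p.get("type") == "text")

msg = {
    "role": "user",
    "content": [
        {"type": "text", "text": "describe the image"},
        {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
    ],
}

flat = flatten_content(msg["content"])
print(flat)  # -> describe the image   (the image URL is gone)
```

The result matches the log line `Prompt (before templating): user:describe the image`, with no image reference left for the chat template to turn into an `<image>` tag.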