Open · xingyaoww opened 3 weeks ago
OpenHands started fixing the issue! You can monitor the progress here.
An attempt was made to automatically fix this issue, but it was unsuccessful. A branch named 'openhands-fix-issue-4809' has been created with the attempted changes. You can view the branch here. Manual intervention may be required.
I think a problem we might have here, just as we have for prompt caching, is that there are providers of some well-known models (including Claude) which don't support one or both of these. I seem to recall that Sonnet is on Vertex (I think? I didn't try it there), and there it doesn't support prompt caching.
In theory, the same could be true for vision. To clarify, I don't know of such a case for vision, but in the future we should probably consider another solution here. Shouldn't litellm take the provider into account when it returns `supports_thing`?
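For illustration, a minimal sketch of why the provider/route prefix matters: litellm's capability checks are keyed by the full model string, so the same underlying model can report differently depending on the prefix. The exact results depend on the installed litellm version and its model map; this is an assumption-laden demo, not OpenHands code:

```python
import litellm

# Capability lookup is keyed by the model string, so a proxy- or
# provider-prefixed route may not match litellm's model map even when
# the underlying model supports the feature. Outputs are illustrative.
print(litellm.supports_vision(model="claude-3-5-sonnet-20241022"))
print(litellm.supports_vision(model="litellm_proxy/claude-3-5-sonnet-20241022"))
```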
@enyst yeah, litellm is supposed to handle it... until you have a lot of providers, which makes things tricky 😢 i don't really blame them lol
Is there an existing issue for the same bug?
Describe the bug and reproduction steps
I got the following error when using the model `litellm_proxy/claude-3-5-sonnet-20241022` through a LiteLLM proxy. It is supposed to support vision inputs. In L345-L348 of `openhands/llm/llm.py`, maybe we should also check `litellm.supports_vision` for `model_name.split('/')[-1]`.
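A minimal sketch of that suggested fallback, assuming `litellm.supports_vision` returns `False` for model strings it does not recognize (the helper name is hypothetical; the actual code in `llm.py` may differ):

```python
import litellm

def model_supports_vision(model_name: str) -> bool:
    """Check vision support, falling back to the bare model name.

    A proxy-prefixed string like "litellm_proxy/claude-3-5-sonnet-20241022"
    is usually unknown to litellm's model map, so retry the check with the
    last path segment ("claude-3-5-sonnet-20241022").
    """
    if litellm.supports_vision(model=model_name):
        return True
    return litellm.supports_vision(model=model_name.split('/')[-1])
```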
OpenHands Installation
Docker command in README
OpenHands Version
No response
Operating System
None
Logs, Errors, Screenshots, and Additional Context
No response