Closed: alperyilmaz closed this 2 months ago
This should work out of the box! As it's just a new model name, you can run `interpreter --model openai/gpt-4o`. By the next update, we will set this as the default model.
Let me know if that works @alperyilmaz, and thanks for opening this!
It works! I have a question about how things work, especially about images. When I paste an image and ask "what do you see?", I expected the image to be sent to the OpenAI model gpt-4o, which would then reply with what it sees in the image. But when I ask this question through open-interpreter, it lays out a plan like this:
So, if I understand correctly, it's not possible to take advantage of the vision capabilities of gpt-4o. Did I understand that correctly? Or am I using the wrong prompts?
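For context, this is roughly what I expected to happen under the hood: a minimal sketch using `litellm` directly (which the thread below confirms OI uses as its client). The file path here is hypothetical, and it assumes `OPENAI_API_KEY` is set in the environment:

```python
import base64

import litellm

# Read a locally pasted image and encode it as a data URL,
# the format OpenAI-style vision APIs accept for inline images.
with open("pasted_image.png", "rb") as f:  # hypothetical path
    b64 = base64.b64encode(f.read()).decode()

response = litellm.completion(
    model="openai/gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What do you see in this image?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```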
@KillianLucas I think @alperyilmaz didn't run into a problem just chatting with the new model. But when running OI with `i --vision`, the `--vision` param overrides the `gpt-4o` model setting in the profile with `gpt-4-vision-preview` (luckily only for the current conversation; it doesn't actually modify the profile file, even though there's an unexpected prompt: "We have updated our profile file format. Would you like to migrate your profile file to the new format? No data will be lost."). This usually means the new model `gpt-4o` is not in the list of vision-capable models defined in the OI source code.
I checked the source code and found that if the `--vision` param is passed at launch, OI loads settings from a built-in profile, `vision.yaml`, which is not accessible to normal users and which sets the model to `gpt-4-vision-preview`. And because the version recorded in `vision.yaml` is still `0.2.1` (the latest is `0.2.5`), it prints the profile migration prompt. BTW, the check for whether a model is a vision model is done with `litellm.supports_vision`.
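You can reproduce that check outside OI with a quick sketch (results depend on your installed `litellm` version):

```python
import litellm

# Reproduce OI's vision-model check. On older litellm releases,
# gpt-4o was missing from the vision model list, so it returned False
# while gpt-4-vision-preview returned True.
for model in ["gpt-4-vision-preview", "gpt-4o"]:
    print(model, litellm.supports_vision(model=model))
```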
Just checked the latest version `1.37.16` of `litellm`: they have added support for `gpt-4o`, so `litellm.supports_vision` should now work correctly. However, OI still tries to use libraries like PIL to analyze the image locally on the user's machine. Worth more investigation.
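If anyone wants to verify their local setup, a small sketch (assumes `litellm` is installed from PyPI):

```python
import importlib.metadata

import litellm

# Per the notes above, gpt-4o vision support landed in litellm 1.37.16.
print("litellm version:", importlib.metadata.version("litellm"))
print("gpt-4o supports vision:", litellm.supports_vision(model="gpt-4o"))
```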
And `gpt-4o-mini`, but it looks like that's already included, so this issue can be closed?
Is your feature request related to a problem? Please describe.
This feature request is not related to a problem. Support for gpt-4o will make replies faster and also cheaper. And maybe in the future it might allow open-interpreter to work with sound/voice.
Describe the solution you'd like
Either "-vi" option might directly mean "gpt-4o" model instead of "gpt-4-vision-preview" OR a new argument can be added to support "gpt-4o" specifically
Describe alternatives you've considered
No response
Additional context
No response