Closed · racampos closed this 1 month ago
Thanks for bringing that up! Fortunately, litellm has also been busy in the meantime and added some extra support, but I haven't yet tried out what is needed on our end on top of that.
My guess is it should mostly just work! It might make sense to expose some new options, though, e.g. via env vars.
litellm/llms/OpenAI/o1_reasoning.py: they do translate parameters, but `drop_params` may need to be set in OH to prevent errors.
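For illustration, here is a rough sketch of the kind of translation involved. This is not litellm's actual internals; `adapt_params_for_o1` is a hypothetical helper showing what renaming `max_tokens` and dropping unsupported parameters (the effect of `drop_params`) would look like:

```python
# Hypothetical sketch, NOT litellm's real code: illustrate parameter
# translation for o1-style models and the effect of drop_params.

O1_UNSUPPORTED = {"temperature", "top_p", "stop"}  # rejected by the o1 API

def adapt_params_for_o1(params: dict, drop_params: bool = True) -> dict:
    """Rename/remove chat-completion params that o1 models do not accept."""
    out = dict(params)
    # o1 renamed max_tokens -> max_completion_tokens
    if "max_tokens" in out:
        out["max_completion_tokens"] = out.pop("max_tokens")
    for key in O1_UNSUPPORTED & out.keys():
        if drop_params:
            out.pop(key)  # silently drop, like litellm's drop_params=True
        else:
            raise ValueError(f"unsupported parameter for o1 models: {key}")
    return out

print(adapt_params_for_o1({"max_tokens": 1024, "temperature": 0.2}))
# prints {'max_completion_tokens': 1024}
```

With `drop_params=False` the helper raises instead of dropping, which mirrors the errors people see when the flag is not set.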
I tested via openrouter.
It sets up, but when I ask OpenHands to do something it answers with this:
Agent encountered an error while processing the last action. Error: NotFoundError: litellm.NotFoundError: NotFoundError: OpenrouterException - 404 page not found Please try again.
What version do you use, @Volko61 ? It's brand new, it works in the latest development version.
I just figured out that I have this issue with other openrouter models, so it's not the model's fault. I'm reinstalling Docker Desktop and OpenHands to try again. I'm on Windows.
> What version do you use, @Volko61 ? It's brand new, it works in the latest development version.
I still get the issue : Hi! I'm OpenHands, an AI Software Engineer. What would you like to build with me today?
Create a tetris game in python
Agent encountered an error while processing the last action. Error: APIError: litellm.APIError: APIError: OpenrouterException - Connection error. Please try again.
Do you have any idea why ?
(I'm on 0.9 btw; I just copy-pasted the Get Started command in WSL.)
Here are the logs:
INFO: 172.17.0.1:48538 - "GET /api/list-files HTTP/1.1" 200 OK
============== CodeActAgent LEVEL 0 LOCAL STEP 0 GLOBAL STEP 0
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
INFO: 172.17.0.1:48538 - "GET /api/list-files HTTP/1.1" 200 OK
What have you set up in the UI's configuration screen (cog wheel bottom right)?
> What have you set up in the UI's configuration screen (cog wheel bottom right)?

@tobitege
I tested openrouter with o1 and claude3.5, and I also tested llama3.1 8B with groq.
Something strange: when I open the settings, it shows the advanced tab. Even if I switch to the non-advanced tab and save, it does not work, and when I reopen the settings, it opens on the advanced tab again.
If you're using the docker command to start OH, please try ghcr.io/all-hands-ai/openhands:0.9.3 where it says 0.9 or "main" instead of "0.9.3" to get the most current version (and images) and keep us posted if the situation changes.
> If you're using the docker command to start OH, please try ghcr.io/all-hands-ai/openhands:0.9.3 where it says 0.9 or "main" instead of "0.9.3" to get the most current version (and images) and keep us posted if the situation changes.
@tobitege I tried with 0.9.3 and got the same issue
> If you're using the docker command to start OH, please try ghcr.io/all-hands-ai/openhands:0.9.3 where it says 0.9 or "main" instead of "0.9.3" to get the most current version (and images) and keep us posted if the situation changes.
>
> @tobitege I tried with 0.9.3 and got the same issue
Sorry, forgot to explicitly mention to also remove all OH images you may see in your docker desktop first.
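For anyone following along, the cleanup step above can also be done from the command line. This is a sketch assuming a standard Docker setup; the image tags on your machine may differ, so check `docker images` first:

```shell
# Remove any cached OpenHands images so the next run pulls fresh ones,
# then pull the current development build (the "main" tag).
docker images "ghcr.io/all-hands-ai/openhands" -q | xargs -r docker rmi -f
docker pull ghcr.io/all-hands-ai/openhands:main
```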
It works!
I had no problem with this setting. I'll update the interface to include it in the default slider.
> If you're using the docker command to start OH, please try ghcr.io/all-hands-ai/openhands:0.9.3 where it says 0.9 or "main" instead of "0.9.3" to get the most current version (and images) and keep us posted if the situation changes.
>
> @tobitege I tried with 0.9.3 and got the same issue
>
> Sorry, forgot to explicitly mention to also remove all OH images you may see in your docker desktop first.
@tobitege I've removed all OH images before testing again and same issue
@amanape I think "Base URL" should be at the top, then the Model, then the API key; this sounds very good "logically", thx!
My issue just solved itself with:
What problem or use case are you trying to solve?
OpenAI's new models o1-preview and o1-mini expose a slightly different API than previous models. Thus, changes need to be made to OpenHands' codebase in order to support these new models.

Describe the UX of the solution you'd like
The UX stays the same. However, according to OpenAI, these new models introduce a new paradigm of interaction. They now have a built-in chain-of-thought process that might require us to redesign the way the model is prompted.

Do you have thoughts on the technical implementation?
Through a trial-and-error process I identified at least five differences in the new API:

- role: the system role is not supported
- max_tokens: changed to max_completion_tokens
- temperature: only supports the value 1 (this doesn't make sense to me; maybe with this new paradigm the temperature concept is no longer required?)
- top_p: only supports the value 1
- stop: is no longer supported

Note: These are just my empirical observations. There is still no mention of these new models in OpenAI's official documentation for chat. Actually, I think this "issue" should be formally addressed only after the official docs are available and there is more information about the new prompting paradigm for these models.
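Given the apparent restriction on the system role, one common workaround is to fold the system prompt into the first user message. The sketch below is hypothetical (it is not OpenHands' or litellm's actual implementation) and just illustrates the idea:

```python
# Hypothetical workaround sketch (not OpenHands' real code): merge system
# messages into the first user message for models that reject the system role.

def merge_system_into_user(messages: list[dict]) -> list[dict]:
    """Return a copy of `messages` with system content folded into the
    first user message, for APIs that reject role == "system"."""
    system_parts = [m["content"] for m in messages if m["role"] == "system"]
    rest = [dict(m) for m in messages if m["role"] != "system"]
    if system_parts and rest and rest[0]["role"] == "user":
        rest[0]["content"] = "\n\n".join(system_parts + [rest[0]["content"]])
    return rest

msgs = [
    {"role": "system", "content": "You are a coding assistant."},
    {"role": "user", "content": "Create a tetris game in python"},
]
# The system text ends up prepended to the user message.
print(merge_system_into_user(msgs))
```

Whether this preserves prompt quality for chain-of-thought models is an open question; it is only a stopgap until the official docs clarify the intended prompting style.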