Closed: jordantgh closed this issue 11 months ago
OK, it seems that my hypothesis was right - when you set an openrouter API key, it still draws from the list of models specified by OPENAI_MODELS, and changing this is all you need.
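For anyone landing here later, a minimal sketch of the relevant `.env` entries, assuming the variable names from LibreChat's `.env.example` (the key value and model list here are placeholders for illustration, not real values):

```shell
# Hypothetical illustration of the change discussed above:
# set the OpenRouter key and override the default model list.
OPENROUTER_API_KEY=your-openrouter-key
OPENAI_MODELS=gpt-4,gpt-4-32k
```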
> OK, it seems that my hypothesis was right - when you set an openrouter API key, it still draws from the list of models specified by OPENAI_MODELS, and changing this is all you need.
It should be fetching the OpenRouter API for a list of models so that all of its supported models are shown; was it not doing that for you? They are formatted differently, e.g. openai/gpt-xxx.
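For reference, a sketch of how a model list could be pulled from OpenRouter's public models endpoint. The `/api/v1/models` URL and the `data[].id` response shape are my assumptions about the public API, not LibreChat's actual code, and `extractModelIds` / `fetchOpenRouterModels` are names I made up for the sketch:

```javascript
// Extract model ids (e.g. "openai/gpt-4") from an OpenRouter
// models-endpoint response body. Pure function, easy to test.
function extractModelIds(payload) {
  return (payload.data || []).map((m) => m.id);
}

// Hypothetical usage against the live endpoint (not LibreChat's code):
async function fetchOpenRouterModels() {
  const res = await fetch('https://openrouter.ai/api/v1/models');
  return extractModelIds(await res.json());
}
```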
I found that when you make the change and set OpenRouter, you restart your server as usual, but the client/web app page needs to be refreshed once or twice per server restart before the list updates from the default.
In this case, I didn't even have to refresh the page after the first server restart, and I don't have OPENAI_MODELS set.
Hm, it's not what I see.
Here is how things look with OPENAI_MODELS commented out and OPENROUTER_API_KEY set:
And here is an example where I set both:
[screenshot: .env]
[screenshot: UI]
Note that I fully restarted the container and cleared the browser cache after every change. I am using Docker Desktop on Windows.
I'd note that it does appear to be working, and the chats are showing up in my openrouter activity.
I don't know if this is helpful, but it seems that the model name is automatically appended to the openai/ route, hence:
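A tiny sketch of that behaviour as I understand it from the screenshots; the function name is mine, not LibreChat's, and this is a guess at the logic rather than the actual source:

```javascript
// Map a bare OpenAI model name to the provider-prefixed id that
// OpenRouter expects, leaving already-prefixed ids untouched.
function toOpenRouterModel(model) {
  return model.includes('/') ? model : `openai/${model}`;
}

console.log(toOpenRouterModel('gpt-4'));        // "openai/gpt-4"
console.log(toOpenRouterModel('openai/gpt-4')); // "openai/gpt-4"
```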
> Hm, it's not what I see.
> Here is how things look with OPENAI_MODELS commented out and OPENROUTER_API_KEY set:
Try refreshing the page to see if the list updates. Also I guess it appends on their end, that's good to know.
> Try refreshing the page to see if the list updates.
As I said, I was fully restarting the container and clearing the browser cache after any changes (and reloading, of course). I can confirm that, at least on my system, setting the OPENROUTER_API_KEY variable alone isn't sufficient to populate the web frontend with the list of OpenRouter models; I have to do this manually.
I haven't set any OpenAI keys in the first place, as I went straight to OpenRouter. Hence the page still prompts me to "Set API Key First".
Is it possible there is some path dependency where you are assuming that first an openai key is present and being overwritten?
From quickly scanning around the code base, it isn't clear to me how modelsConfig gets updated when OPENROUTER_API_KEY is set; if you can point me in the right direction, I can take a closer look and see if I can figure something out.
> Is it possible there is some path dependency where you are assuming that first an openai key is present and being overwritten?
Oh I see: yes, you currently need an OPENAI_API_KEY variable set, and then models will fetch. The reason is that an API call shouldn't be made if no OpenAI key is set, but OpenRouter should be treated separately in this regard rather than just piggybacking on that condition.
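The gating condition being described would look something like this; a sketch of the intended logic under my reading of the thread, not the actual LibreChat source, and `shouldFetchModels` is a name I invented:

```javascript
// Decide whether the server should call out for a model list.
// Per this thread, the fetch was previously gated on an OpenAI key
// alone; the fix treats an OpenRouter key as sufficient by itself.
function shouldFetchModels(env) {
  return Boolean(env.OPENAI_API_KEY || env.OPENROUTER_API_KEY);
}

console.log(shouldFetchModels({ OPENROUTER_API_KEY: 'x' })); // true
console.log(shouldFetchModels({}));                          // false
```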
I see - does this need looking at further? I can open another issue if so. Otherwise it's fine. For myself the issue is solved :)
> I see - does this need looking at further? I can open another issue if so. Otherwise it's fine. For myself the issue is solved :)
It was a quick fix, so I'm pushing it with my new PR!
What is your question?
I am trying to access the gpt-4-32k model via OpenRouter. I have set my OpenRouter API key, and from a couple of test messages I can confirm I am able to communicate with GPT-4 via the LibreChat interface:
However, I'd like to access the 32k model. Is it as simple as uncommenting the OPENAI_MODELS key and adding gpt-4-32k? I will try that now and report back, but I thought it was worth logging an issue either way, since the behaviour isn't transparent.
What is the main subject of your question?
Documentation, Endpoints