wandb / openui

OpenUI lets you describe UI using your imagination, then see it rendered live.
https://openui.fly.dev
Apache License 2.0

Add LM Studio Support #74

Open · Pils10 opened 4 months ago

Pils10 commented 4 months ago

I really like this project, but I prefer to run my local models through LM Studio rather than Ollama, primarily because of its simple-to-use GUI that tells me exactly what models I can run and what quantization I need to run them on my GPU. If you guys have time, please add LM Studio support to this project. Thanks in advance.

vanpelt commented 4 months ago

I haven't used it, will take a look. Community contributions are also very welcome if anyone feels inspired to dive into the source or ask Devin to do it 😜

mmuyakwa commented 4 months ago

https://github.com/wandb/openui/issues/91#issuecomment-2099240831

In the file server.py in the folder backend/openui, at line 68, set the base_url to whatever you wish:

openai = AsyncOpenAI(
    base_url="https://YOUR-URL/v1"  # point the client at your own OpenAI-compatible endpoint
)  # e.g. AsyncOpenAI(base_url="http://127.0.0.1:11434/v1") for Ollama's built-in endpoint
ollama = AsyncClient()
router = APIRouter()
session_store = DBSessionStore()
github_sso = GithubSSO(
    config.GITHUB_CLIENT_ID, config.GITHUB_CLIENT_SECRET, f"{config.HOST}/v1/callback"
)
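
For LM Studio specifically, a minimal sketch of that edit might look like this, assuming LM Studio's local server is running on its default port 1234 (LM Studio does not validate the API key, so any placeholder string should do):

from openai import AsyncOpenAI

# Point the client at LM Studio's OpenAI-compatible endpoint.
# "lm-studio" is an arbitrary placeholder key; LM Studio ignores it.
openai = AsyncOpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",
)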
Replikate648 commented 3 months ago

#91 (comment)


Did not work for me. Could you please explain further? I set it like this: [screenshot]

But got: [error screenshot]

Replikate648 commented 3 months ago

Update

I managed to make it work with LM Studio; follow the instructions below:

  1. Open server.py in the backend directory of your openui folder.
  2. On line 68, replace the client setup with:

openai = AsyncOpenAI(base_url="http://localhost:1234/v1")

  3. Set the OPENAI_API_KEY environment variable and confirm it:

(openui) H:\openui\backend>set OPENAI_API_KEY=http://localhost:1234/v1

(openui) H:\openui\backend>echo %OPENAI_API_KEY%
http://localhost:1234/v1

  4. Start your local server in LM Studio with a model loaded (a quick sanity check for the endpoint is sketched after this list).
  5. (openui) H:\openui\backend>python -m openui

  6. ENJOY
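
Before launching OpenUI, you can verify that LM Studio's endpoint is actually reachable. A minimal check, assuming the default port 1234 and the official openai Python package (the "lm-studio" key is again just a placeholder):

from openai import OpenAI

# List the models LM Studio is currently serving; a non-empty list
# confirms the server is up and a model is loaded.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
for model in client.models.list():
    print(model.id)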
