pfrankov / obsidian-local-gpt

Local Ollama and OpenAI-like GPT assistance for maximum privacy and offline access
MIT License

[feature request] support open-webui as OpenAI compatible server URL #12

Closed: Kisaragi-ng closed this issue 3 months ago

Kisaragi-ng commented 4 months ago

Hello, is it possible for this plugin to support the API endpoint from open-webui?

I've been using Obsidian for a while with OpenAI's API, but now I want to run it against my own LLM instances. I chose this plugin because I saw it has fields for entering my own Ollama URL and an OpenAI-compatible URL.

In the open-webui discussions there are already two threads about this, but I seem to be the only one trying to make it work with obsidian-local-gpt.

This is the configuration I found that lets the plugin fetch my available models: the server URL ends in `/ollama`.
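
As a rough illustration of what that endpoint exposes, here is a minimal sketch of listing models through open-webui's Ollama proxy. The base URL, the API key, and the `/ollama/api/tags` path are assumptions from my instance, not anything the plugin itself requires; adjust them for your setup.

```typescript
// Sketch only: list the models the plugin would see through open-webui's
// Ollama proxy. URL, key, and path below are placeholders for my instance.
const OPEN_WEBUI_URL = "http://localhost:3000";
const API_KEY = "sk-..."; // API key generated in open-webui's settings

async function listModels(): Promise<string[]> {
  const res = await fetch(`${OPEN_WEBUI_URL}/ollama/api/tags`, {
    headers: { Authorization: `Bearer ${API_KEY}` },
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const data = await res.json();
  // Ollama's /api/tags responds with { models: [{ name, ... }, ...] }
  return (data.models ?? []).map((m: { name: string }) => m.name);
}

listModels().then(console.log).catch(console.error);
```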

It does run, but it seems the generated text doesn't persist:

https://github.com/open-webui/open-webui/assets/88741696/9dd7f679-91a4-478d-aeb1-d9c75d608c5e

I have the obsidian-local-gpt Actions set to the defaults (this also happens on a fresh Obsidian install with no other plugins installed).

Unfortunately I'm not quite sure how to troubleshoot this, nor whether I'm capable of making a PR, so if any contributors want to add support for open-webui, I'm willing to lend my instance for testing purposes.

quantarion commented 3 months ago

Open-webui is not a server; it is a client. As the name suggests, it's only a web user interface, originally built for Ollama. Ollama is a server that provides its own API and makes it easier to deploy local LLMs. Ollama also provides an OpenAI compatibility mode.
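
For reference, a minimal sketch of using that compatibility mode directly, assuming Ollama is running on its default port 11434 and you have a model pulled locally (the model name below is just a placeholder):

```typescript
// Sketch of calling Ollama's OpenAI-compatible API.
// Assumes Ollama on its default port; replace "llama3" with a model you have.
const OLLAMA_OPENAI_BASE = "http://localhost:11434/v1";

async function chat(prompt: string): Promise<string> {
  const res = await fetch(`${OLLAMA_OPENAI_BASE}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const data = await res.json();
  // Standard OpenAI response shape: choices[0].message.content
  return data.choices[0].message.content;
}

chat("Say hi in one sentence.").then(console.log).catch(console.error);
```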

Ollama has become rather redundant, because most of the engines (llama.cpp, exllama, Aphrodite) provide an OpenAI-compatible server too. If you have a GPU with enough VRAM, Ollama is not the best choice at all. Here's a handy guide: Guide to Choosing Quants and Engines.

I can't reproduce the other problem you're having; maybe try reinstalling the plugin?