-
You should probably work with [open-webui](https://github.com/open-webui/open-webui) to get a nice, easy-to-use "call"-like interface.
-
**describe the definition of done**
- [ ] build openwebui integration
- [ ] launch it on https://www.reddit.com/r/LocalLLaMA/ (just a post, video, or screenshot!)
let's discuss if the code sh…
-
Connecting to/using openwebui from the left bar would be perfect. I want to add my own models that are running on my PC, and openwebui is perfect for that. It can download and start ollama models. Inte…
-
I was messing with LMStudio's local server and connecting it to openwebui, and it was really convenient to hook the two up. I don't really need the LMStudio application and would love to just use FastM…
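For context, a minimal sketch of what that connection looks like from the client side, assuming LM Studio's local server is running on its default port 1234 and exposing its OpenAI-compatible API (openwebui can be pointed at the same base URL as an OpenAI-type connection):

```python
from openai import OpenAI  # pip install openai

# Assumption: LM Studio's local server is on its default http://localhost:1234/v1.
# Open WebUI can use this same base URL as an "OpenAI API" connection, so the
# LM Studio GUI isn't needed once the server is running.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# List whatever models the local server exposes.
for model in client.models.list().data:
    print(model.id)

# Send a quick test chat to confirm the endpoint works end to end.
reply = client.chat.completions.create(
    model="local-model",  # hypothetical id; use one printed by the loop above
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(reply.choices[0].message.content)
```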
-
Please update the Openwebui mixture agent if you're still around ^^
"""
MODERATION TEAM NOTE:
WE'VE HAD SEVERAL REPORTS THAT THIS FUNCTION NO LONGER WORKS ON LATER VERSIONS OF OPENWEBUI.
WE INVITE THE AUTHOR T…
-
Hi,
My config: A770 + Ollama + OpenWebui + intelanalytics/ipex-llm-inference-cpp-xpu:latest docker
After 2-3 chat messages I get this error:
```ollama_llama_server: /home/runner/_work/llm.cpp/llm.…
-
I'm trying to deploy OpenWebUI via the Helm Chart with persistence enabled. However, when I do so, the pod won't start, reporting:
```
File "/usr/local/lib/python3.11/site-packages/peewee.py", li…
-
This project is truly impressive. Based on my own experiments, the possibilities are limitless, and it already works exceptionally well. I have a couple of ideas to further enhance its capabilities:
…
-
### Checklist
- [X] I have filled out the template to the best of my ability.
- [X] This only contains 1 feature request (if you have multiple feature requests, open one feature request for each feat…
-
According to [docs.openwebui.com/api](https://docs.openwebui.com/api/#swagger-documentation-links), I understand there should be documentation at `ai.my.com/docs`, but all I see is a 404: Not Found. I'd …
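For anyone hitting the same thing: as far as I can tell, the interactive Swagger page at `/docs` is only served when the backend runs with `ENV=dev`, while the JSON API itself stays reachable either way. A minimal sketch for checking both, assuming a deployment at `ai.my.com` (the host from the post) and an API key generated under Settings > Account:

```python
import requests

BASE_URL = "https://ai.my.com"   # host from the post; replace with your deployment
API_KEY = "sk-..."               # an Open WebUI API key (Settings > Account)

# The interactive Swagger page; a 404 here usually means the backend was not
# started with ENV=dev (assumption based on the linked docs).
print("/docs ->", requests.get(f"{BASE_URL}/docs").status_code)

# The REST API itself should still respond, e.g. listing available models.
resp = requests.get(
    f"{BASE_URL}/api/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
print("/api/models ->", resp.status_code)
```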