-
### Discussed in https://github.com/bmachek/lrc-ai-assistant/discussions/3
Originally posted by **FA-UC-HR** November 15, 2024
What do you think about using local / self-hosted LLMs? Like olla…
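A minimal sketch of what this could look like, assuming a default local Ollama install on `localhost:11434` with a model such as "llama3" already pulled (the model name, port, and prompt here are illustrative assumptions, not details from the discussion):

```python
# Sketch: pointing an OpenAI-style client at a local Ollama server instead of OpenAI.
# Assumes: `ollama serve` running on the default port and `ollama pull llama3` done beforehand.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # any non-empty string; Ollama does not check it
)

response = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Describe this photo in one sentence."}],
)
print(response.choices[0].message.content)
```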
-
### Describe the bug
When running inference over an OpenAI-compatible API with Perplexica or avante.nvim, the error sometimes appears; after that happens it doesn't work anymore until I restart the progr…
-
C:\FORGE\stable-diffusion-webui-forge\extensions\sd-webui-decadetw-auto-prompt-llm\scripts\auto_prompt_llm.py:441: GradioDeprecationWarning: unexpected argument for Slider: hint
llm_text_tempture =…
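The warning says `gr.Slider` does not accept a `hint` keyword; recent Gradio versions use `info` for helper text shown under a component. A hedged sketch of the likely fix in `auto_prompt_llm.py` (the slider range, value, and label below are assumptions; only the variable name comes from the log):

```python
import gradio as gr

# Before (triggers "GradioDeprecationWarning: unexpected argument for Slider: hint"):
# llm_text_tempture = gr.Slider(0.0, 2.0, value=0.7, label="Temperature", hint="...")

# After: pass the helper text via `info`, which gr.Slider actually supports.
llm_text_tempture = gr.Slider(
    minimum=0.0, maximum=2.0, value=0.7,
    label="Temperature",
    info="Sampling temperature passed to the LLM",  # replaces the unsupported `hint` kwarg
)
```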
-
WARNING:[auto-llm]:[][AutoLLM][getReq][llm_text_ur_prompt]A superstar Flirting on stage.
WARNING:[auto-llm]:[][AutoLLM][getReq][Header]{'Content-Type': 'application/json', 'Authorization': 'Bearer lm…
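The log shows the extension assembling an OpenAI-style request: a JSON body with the user prompt and a Bearer token in the headers. A hedged sketch of what such a request looks like; the URL, token, and model name below are placeholders (the real token and endpoint are truncated in the log):

```python
import requests

# Illustrative only: endpoint, token, and model are placeholders, not values from the log.
url = "http://localhost:1234/v1/chat/completions"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_KEY",  # the token in the log is truncated ("Bearer lm…")
}
payload = {
    "model": "local-model",
    "messages": [{"role": "user", "content": "A superstar Flirting on stage."}],
}

resp = requests.post(url, headers=headers, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```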
-
Can you guys provide a container that has Ollama only? The ipex-llm-cpp-inference-xpu image has Open WebUI, but it has been on an old version since May and isn't really working: it runs, but you can't chat, open-webui of…
-
Hi, awesome project!
I'm on the doorstep of my first query, but I'm stuck.
This is the Ollama server API endpoint:
```bash
curl http://10.4.0.100:33821/api/version
{"version":"0.4.2"}
```
T…
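Since the curl check above shows the server answering on `/api/version`, a first generation request against the same endpoint could look like the sketch below. This assumes a model (here "llama3") has already been pulled on that server; the model name and prompt are illustrative assumptions:

```python
import requests

# Assumes the Ollama server at 10.4.0.100:33821 is reachable and "llama3" is pulled there.
base = "http://10.4.0.100:33821"

print(requests.get(f"{base}/api/version", timeout=10).json())  # e.g. {"version": "0.4.2"}

resp = requests.post(
    f"{base}/api/generate",
    json={"model": "llama3", "prompt": "Say hello in one short sentence.", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```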
-
## Installation Method
Podman/docker
## Environment
- **Open WebUI Version:** v0.4.4
- **Ollama (if applicable):** v0.4.2
- **Operating System:** Fedora 41
- **Browser (if applicable):**…
-
The analysis progress gets stuck, and it only continues after I switch to a different network.
2024-11-21 15:05:39.123 | INFO | app.config.config:load_config:22 - load config from file: G:\NarratoAI\NarratoAI/config.toml
2024-11-21 15:05:39.128 | INFO | app.con…
-
Hello.
I'm trying to use Brave Leo AI with Ollama using an Intel GPU.
The instructions from Brave for using local LLMs via Ollama are here:
https://brave.com/blog/byom-nightly/
The instructions fr…
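Brave Leo's "Bring Your Own Model" setup points at an OpenAI-compatible chat endpoint, so before touching the browser settings it can help to confirm that Ollama's endpoint answers locally. A minimal check, assuming a default install on `localhost:11434` and a pulled model such as "llama3" (both assumptions, not details from this post; the Intel GPU side is not covered here):

```python
import requests

# Sanity check of the endpoint Brave's BYOM settings would call.
# Assumes Ollama is listening on the default port and "llama3" is pulled.
resp = requests.post(
    "http://localhost:11434/v1/chat/completions",
    json={
        "model": "llama3",
        "messages": [{"role": "user", "content": "Reply with the single word: ready"}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```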
-
Nice tool.
Maybe it is possible to integrate it directly into the experimental Memory feature of Open WebUI?
https://github.com/open-webui/open-webui/blob/main/backend/open_webui/apps/webui/models/m…