-
### Describe the bug
When running inference over an OpenAI-compatible API with Perplexica or avante.nvim, the error sometimes appears; after that happens it doesn't work anymore until I restart the progr…
-
C:\FORGE\stable-diffusion-webui-forge\extensions\sd-webui-decadetw-auto-prompt-llm\scripts\auto_prompt_llm.py:441: GradioDeprecationWarning: unexpected argument for Slider: hint
llm_text_tempture =…
-
WARNING:[auto-llm]:[][AutoLLM][getReq][llm_text_ur_prompt]A superstar Flirting on stage.
WARNING:[auto-llm]:[][AutoLLM][getReq][Header]{'Content-Type': 'application/json', 'Authorization': 'Bearer lm…
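The log above shows the headers of a standard OpenAI-compatible chat request. A minimal sketch of the equivalent call follows; the endpoint, port, and model name are assumptions for illustration, and the API key is a placeholder (the real token in the log is truncated):

```bash
# Build the JSON body for an OpenAI-compatible /v1/chat/completions request.
# NOTE: model name and endpoint below are assumptions, not taken from the log.
PAYLOAD='{"model":"local-model","messages":[{"role":"user","content":"A superstar Flirting on stage."}]}'
echo "$PAYLOAD"

# Uncomment to send it with the same Content-Type / Authorization headers as the log:
# curl http://localhost:1234/v1/chat/completions \
#   -H "Content-Type: application/json" \
#   -H "Authorization: Bearer $LLM_API_KEY" \
#   -d "$PAYLOAD"
```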
-
Can you provide a container that has Ollama only? The ipex-llm-cpp-inference-xpu one includes Open WebUI, but it is an old version from May and doesn't work properly: it starts, but you can't chat. open-webui of…
-
Hi, awesome project!
I'm on the doorstep of my first query, but I'm stuck.
This is the Ollama server API endpoint:
```bash
curl http://10.4.0.100:33821/api/version
{"version":"0.4.2"}
```
T…
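Since `/api/version` answers, a first generation query against the same endpoint could look like the sketch below. The model name "llama3" is an assumption; substitute whatever `ollama list` actually reports on that server:

```bash
# JSON body for Ollama's /api/generate endpoint; "stream": false asks for
# a single JSON response instead of a stream of chunks.
GEN_PAYLOAD='{"model":"llama3","prompt":"Hello","stream":false}'
echo "$GEN_PAYLOAD"

# Uncomment to send it against the server from the version check above:
# curl http://10.4.0.100:33821/api/generate -d "$GEN_PAYLOAD"
```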
-
Nice tool.
Maybe it would be possible to integrate it directly into the experimental Memory feature of Open WebUI?
https://github.com/open-webui/open-webui/blob/main/backend/open_webui/apps/webui/models/m…
dnl13 updated
3 weeks ago
-
Hello.
I'm trying to use Brave Leo AI with Ollama using an Intel GPU.
The instructions from Brave for using local LLMs via Ollama are here:
https://brave.com/blog/byom-nightly/
The instructions fr…
-
The analysis progress gets stuck; it only continues after I switch to a different network.
2024-11-21 15:05:39.123 | INFO | app.config.config:load_config:22 - load config from file: G:\NarratoAI\NarratoAI/config.toml
2024-11-21 15:05:39.128 | INFO | app.con…
-
I tried configuring default_llm in model_settings.yaml, but when I run chatchat init it still gets changed back to qwen 7b.
chatchat init
2024-10-28 11:55:21.198 | WARNING | chatchat.server.utils:get_default_llm:205 - default llm model qwen2-vl-i…
-
Hi,
after installing all the necessary software (LM Studio) and running its server on port 1234, I get this log in the latest version of Stable Diffusion (Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2)…
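As a first debugging step, it can help to confirm the LM Studio server is actually reachable before pointing the extension at it. A minimal check, assuming the default setup (1234 is LM Studio's default port; adjust host and port if you changed them):

```bash
# Listing the loaded models confirms the OpenAI-compatible server is up.
LMS_URL="http://localhost:1234/v1/models"
echo "GET $LMS_URL"

# Uncomment to run the check:
# curl "$LMS_URL"
```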