-
> Flexible Backend: While text-gen-webui is the default, Patense.local can work with any backend LLM server.
Is it possible to use [Snorkle.local](https://snorkle.local/) with Ollama? Can you provi…
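Since the backend is pluggable, wiring it to Ollama should mostly mean pointing an OpenAI-style client at Ollama's OpenAI-compatible endpoint. A minimal sketch, assuming a local Ollama on its default port with a pulled `llama3` model (the glue inside the app itself may differ):
```
from openai import OpenAI

# Ollama exposes an OpenAI-compatible API under /v1 on its default port.
client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama",  # any non-empty string; Ollama ignores it
)

response = client.chat.completions.create(
    model="llama3",  # assumes this model has been pulled locally
    messages=[{"role": "user", "content": "Summarize claim 1 of the patent."}],
)
print(response.choices[0].message.content)
```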
-
### Describe the bug
When accessing Bolt through a remote URL, Ollama models are not visible in the web UI, despite both services being individually accessible remotely. The models appear correctly…
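One way to narrow this down is to check whether Ollama's model-list endpoint is reachable from the remote machine at all. A minimal check (`my-ollama-host` is a placeholder; when the API answers via curl but the browser UI stays empty, CORS, controlled by the `OLLAMA_ORIGINS` environment variable, is the usual suspect):
```
import requests

# Ollama lists installed models at /api/tags. If this succeeds from the
# remote machine but the web UI shows nothing, the browser request is
# likely being rejected, e.g. by Ollama's OLLAMA_ORIGINS CORS policy.
resp = requests.get("http://my-ollama-host:11434/api/tags", timeout=5)
resp.raise_for_status()
for model in resp.json().get("models", []):
    print(model["name"])
```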
-
### Describe the bug
I'm experiencing unexpected behavior when trying to load the following model:
Model name: Mistral-Large-Instruct-2407-IMat-GGUF
Quantization: Q6_K, size 100.59GB
When…
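The snippet does not name the loader, but at ~100 GB the usual failure points are memory-related. As a point of comparison, a minimal llama-cpp-python load looks like the sketch below (hypothetical path; llama-cpp-python itself is an assumption, not necessarily the loader used in this report):
```
from llama_cpp import Llama

# A ~100 GB Q6_K model must fit across VRAM and system RAM; offload only as
# many layers as the GPU can actually hold and keep the context modest.
llm = Llama(
    model_path="models/Mistral-Large-Instruct-2407.Q6_K.gguf",  # hypothetical path
    n_gpu_layers=40,  # tune to available VRAM; -1 offloads everything
    n_ctx=4096,
)
print(llm("Q: What is 2+2? A:", max_tokens=8)["choices"][0]["text"])
```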
-
Error occurred when executing OmostLLMLoaderNode:
'llama'
File "C:\baidu\novelai-webui\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = ge…
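For readers puzzled by the bare 'llama' in the message: ComfyUI reports the exception's string form, and for a `KeyError` that is just the missing key. A minimal sketch of the failure mode (the registry dict here is hypothetical, not Omost's actual code):
```
# str(KeyError("llama")) is "'llama'", so a missing dict key surfaces as
# nothing but the quoted key name in ComfyUI's error output.
LOADERS = {"gpt2": object()}  # 'llama' intentionally absent

try:
    loader = LOADERS["llama"]
except KeyError as e:
    print(f"Error occurred when executing OmostLLMLoaderNode: {e}")  # -> 'llama'
```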
-
# What is the issue?
## Description
**Bug Summary:**
System prompts do not take effect on the first round.
**Actual Behavior:**
For a specific task scenario, there might be special System Prompts…
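For context, in the OpenAI-style chat format the system prompt is simply the first message in the array, so it should shape the very first assistant reply. A minimal sketch of what "first round" means here (instruction text is a placeholder):
```
# The system prompt is the first message; it should govern round 1,
# not only later rounds.
messages = [
    {"role": "system", "content": "Always answer in French."},
    {"role": "user", "content": "What is the capital of Japan?"},  # round 1
]
# If the bug described above is present, the round-1 reply ignores the
# French instruction and only rounds 2+ honor it.
for m in messages:
    print(m["role"], "->", m["content"])
```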
-
Probably gonna shortlist some wonky ideas, but hey, if this tool is going to be usable anywhere, it had better be feature-rich.
- [ ] Finetuning and LoRA (or other PEFT type) training toolkit https://github.com…
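To make the LoRA item above concrete, here is a minimal PEFT configuration sketch (base model, rank, and target modules are placeholders, not a recommendation):
```
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Wrap a base causal LM with LoRA adapters; only the low-rank matrices train.
base = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model
config = LoraConfig(
    r=8,                        # rank of the low-rank update
    lora_alpha=16,              # scaling factor
    target_modules=["c_attn"],  # attention projection in GPT-2
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # tiny fraction of the full model
```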
-
Build Open WebUI's features into the sidebar.
# To-Do
- [ ] add: sync Open WebUI conversations
- [x] Open WebUI -> Extension (3db7e90a0e8a9cde7174cf3cfa45e46917a8e438)
- [ ] Extension -> Open WebUI
- [ ] add: file attachments
- [ ] ad…
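For the conversation-sync items, the Extension -> Open WebUI direction might look roughly like the sketch below. The bearer-token auth matches Open WebUI's API style, but the endpoint path and response shape here are assumptions to verify against the actual API:
```
import requests

API_BASE = "http://localhost:8080"  # assumed Open WebUI address
TOKEN = "sk-..."                    # API key from Open WebUI settings

# Assumed endpoint and payload shape for listing chats; verify against
# the real Open WebUI API before relying on this.
resp = requests.get(
    f"{API_BASE}/api/v1/chats/",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
for chat in resp.json():
    print(chat.get("id"), chat.get("title"))
```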
-
### Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue](ht…
-
A feature list of everything being added over the short to long term.
**Suggestions welcome.**
I will move items off this list into individual tickets to be worked on as time permits:
```…
-
**env:**
transformers==4.35.2
ctransformers==0.2.27+cu121
```
from ctransformers import AutoModelForCausalLM, AutoTokenizer
model_name = "/home/me/project/search_engine/text-generation-web…
```
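For comparison, a complete, self-contained ctransformers load in the same style (hypothetical GGUF path; `model_type` and `gpu_layers` are assumptions to adjust for the actual model):
```
from ctransformers import AutoModelForCausalLM

# ctransformers loads GGUF/GGML files directly; model_type selects the
# architecture-specific loader and gpu_layers controls CUDA offload.
llm = AutoModelForCausalLM.from_pretrained(
    "/path/to/model.Q4_K_M.gguf",  # hypothetical local path
    model_type="llama",
    gpu_layers=50,
)
print(llm("AI is going to", max_tokens=16))
```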