-
**Have you searched for similar [bugs](https://github.com/Cohee1207/SillyTavern/issues?q=)?**
Yes
**Describe the bug**
When you enable streaming for Oobabooga, it breaks emojis and generates …
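A common cause of this symptom is decoding each streamed chunk independently, which corrupts any multi-byte UTF-8 character (emojis are four bytes) that straddles a chunk boundary. Below is a minimal sketch of the failure and the usual fix with Python's incremental decoder; the chunk split shown is hypothetical, not SillyTavern's actual stream handling:
```python
import codecs

data = "Hello 👋".encode("utf-8")  # the emoji is 4 bytes in UTF-8
chunks = [data[:7], data[7:]]      # hypothetical split inside the emoji

# Naive per-chunk decoding breaks: chunks[0].decode("utf-8") raises
# UnicodeDecodeError, and errors="replace" would emit U+FFFD instead.

# An incremental decoder buffers the partial byte sequence between chunks:
decoder = codecs.getincrementaldecoder("utf-8")()
text = "".join(decoder.decode(chunk) for chunk in chunks)
print(text)  # Hello 👋
```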
-
### Describe the bug
Unable to load the model normally, but llama-cpp-python can load it without issues. I don't know why the llama.cpp loader in text-generation-webui cannot load the model, showing …
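For comparison, a minimal llama-cpp-python load of the same file can confirm the model itself is fine; a sketch, with a placeholder model path:
```python
from llama_cpp import Llama

# Placeholder path; point it at the GGUF that text-generation-webui rejects.
llm = Llama(model_path="./models/model.gguf", n_ctx=2048)
out = llm("Hello", max_tokens=16)
print(out["choices"][0]["text"])
```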
-
In generate.py, bos_token_id is set to 1 and eos_token_id to 2:
```python
model.config.bos_token_id = 1
model.config.eos_token_id = 2
```
However, in finetune.py, the tokenizer is directly loaded from the o…
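One way to make the mismatch visible is to compare those hard-coded IDs against what the tokenizer itself reports; a minimal check, assuming the standard transformers API and a placeholder model name:
```python
from transformers import AutoTokenizer

# Placeholder; use whatever checkpoint finetune.py actually loads.
tokenizer = AutoTokenizer.from_pretrained("your-base-model")

# generate.py forces bos=1 / eos=2; if the tokenizer disagrees,
# finetuning and generation see different special tokens.
print(tokenizer.bos_token_id, tokenizer.eos_token_id)
```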
-
I gave your fork of oobabot a try today, and it seems to be attempting to talk to api/v1/stream, which does not look like a valid endpoint when looking at localhost:5000/docs.
I believe it should be u…
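Since localhost:5000/docs suggests a FastAPI server, one quick way to see which routes actually exist is to fetch the OpenAPI schema behind that page; a small sketch using requests:
```python
import requests

# FastAPI serves the schema for /docs at /openapi.json by default.
schema = requests.get("http://localhost:5000/openapi.json").json()
for path in sorted(schema["paths"]):
    print(path)  # check whether api/v1/stream is actually listed
```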
-
When I load the model:
```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype=torch.float16)
```
- Error:
```
Traceback (most recent call last):
File …
-
### Describe the bug
Got a bug when working with https://github.com/oobabooga/text-generation-webui:
```
Loading llama-7b-hf...
Loading model ...
Traceback (most recent call last):
File "C:\Use…
-
# Prerequisites
ROCm 6
# Expected Behavior
Attempting to utilize llama_cpp_python in the OobaBooga Webui
# Current Behavior
It loads the model into VRAM. Then, upon trying to infer:
gml…
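A useful first check for this class of crash is whether the installed llama-cpp-python wheel was built with GPU offload support at all; a minimal probe, assuming a reasonably recent llama-cpp-python (the function is a binding to upstream llama.cpp):
```python
import llama_cpp

print(llama_cpp.__version__)
# False here usually means a CPU-only wheel was installed; a ROCm/hipBLAS
# build is required for offload to actually run on the GPU.
print(llama_cpp.llama_supports_gpu_offload())
```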
-
I use Ubuntu, the oobabooga webui, cognitivecomputations_dolphin-2.8-mistral-7b-v02, and the latest version ("Revert the use of Router, good ole completion works.")
LLM_API_KEY="na"
LLM_BASE_URL="http:/…
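For reference, plain (non-Router) completions against the webui's OpenAI-compatible endpoint can be exercised directly with the openai client; a minimal sketch in which the base URL and model id are assumptions matching a default port-5000 setup:
```python
from openai import OpenAI

# The local server ignores the key, but the client requires one ("na" as above).
client = OpenAI(base_url="http://localhost:5000/v1", api_key="na")
resp = client.completions.create(
    model="cognitivecomputations_dolphin-2.8-mistral-7b-v02",  # assumed model id
    prompt="Hello",
    max_tokens=16,
)
print(resp.choices[0].text)
```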
-
### Environment
🪟 Windows
### System
Windows 11 (latest update) running through Sillytavern launcher
### Version
SillyTavern 1.12.0 'release' (1d32749ed)
### Desktop Information
- Node.js versi…
-
### Version
Visual Studio Code extension
### Suggestion
Please add support for open LLMs compatible with the endpoint APIs of LLM Studio / ollama / etc.
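For what it's worth, such servers already expose simple local HTTP endpoints; a minimal sketch against ollama's native generate API (the model tag is a placeholder):
```python
import requests

# ollama listens on port 11434 by default; "llama3" is a placeholder tag.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Hello", "stream": False},
)
print(resp.json()["response"])
```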