-
Use this thread for general discussion and debate regarding the Character Card Spec V2. **Anyone** may freely use this thread to discuss the spec. However, if you are an owner or representative for a …
-
Compiled both with and without LLAMA_CUDA, when loading the model it just seems to give up somewhere and returns to the prompt
```
PS C:\Users\Drew\Applications\llama.cpp\out\build\x64-Release\bin…
```
-
in oobabooga > Parameters > Generation
or in file IF_promptMKR_preset.yaml?
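For reference, presets in text-generation-webui are plain YAML files mapping sampling parameters to values. The sketch below is illustrative only (the keys shown are common sampling parameters; the actual contents of `IF_promptMKR_preset.yaml` may differ), but it shows the kind of settings that can live either in the Generation tab or in the preset file:

```yaml
# Hypothetical preset sketch -- values are placeholders, not the real file.
temperature: 0.7
top_p: 0.9
top_k: 40
repetition_penalty: 1.15
```

Values set in the UI and values in a loaded preset cover the same parameters, so the question is really which one the pipeline reads at generation time.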
-
### Describe the bug
I am running a Gradio application locally. The click-event function of a button makes an HTTP request (via `requests`) to a remote server, and the result is returned to the componen…
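A minimal sketch of the setup described, assuming a hypothetical endpoint and handler name (`API_URL`, `on_click` are placeholders, not from the report): the button's click handler blocks on a `requests` call and returns the response body to the output component.

```python
import requests

API_URL = "http://example.com/api"  # hypothetical remote endpoint

def on_click(prompt: str) -> str:
    """Click handler: POST the prompt to the remote server and
    return the response body for the output component."""
    resp = requests.post(API_URL, json={"prompt": prompt}, timeout=30)
    resp.raise_for_status()
    return resp.text

# Wiring it into Gradio would look roughly like:
#   import gradio as gr
#   with gr.Blocks() as demo:
#       box, out = gr.Textbox(), gr.Textbox()
#       gr.Button("Run").click(on_click, inputs=box, outputs=out)
#   demo.launch()
```

Because the handler is synchronous, the request runs inside Gradio's event loop worker and the UI waits for it to finish.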
-
### Describe the bug
Traceback (most recent call last):
File "D:\NEW_OOBA\oobabooga\oobabooga_windows\text-generation-webui\server.py", line 102, in load_model_wrapper
shared.m…
-
### Environment
🐧 Linux
### System
Mint 21.3.0
### Version
latest
### Desktop Information
Nodejs - 12.22.9, oobabooga release version
### Describe the problem
I'm sorry, I d…
-
Anyone else see this issue when using test_inference.py to compute perplexity for any Llama-2 based model that uses rope_scale > 1.0 and context > 4096?
This happens whether I try GPTQ, EXL2 or fp1…
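As context for the `rope_scale` setting mentioned above, here is a sketch of linear RoPE scaling (function name and default dimensions are illustrative, not from any of the loaders discussed): the position index is divided by the scale factor, so a model trained on 4096 positions can be stretched over a longer window.

```python
import numpy as np

def rope_angles(pos, dim=64, base=10000.0, rope_scale=1.0):
    """Rotation angles for rotary position embeddings at position `pos`.

    Linear RoPE scaling divides the position index by `rope_scale`,
    compressing long positions into the range the model saw in training.
    """
    # Per-pair inverse frequencies, as in standard RoPE.
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)
    return (pos / rope_scale) * inv_freq

# With rope_scale=2.0, position 8192 yields the same angles as
# position 4096 does unscaled -- which is why results should only
# diverge between implementations once context exceeds the native 4096.
```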
-
Unsure if this is an exllamav2 issue or a llama-cpp issue. (In contrast, GGUF Q8_0 conversion of BF16 worked.)
When I loaded it via ooba/llama-cpp, inference broke when context length exceeded 4K, al…
-
### Describe the bug
After cloning the repository, Docker is unable to create a container because a configuration file is missing.
### Is there an existing issue for this?
- [X] I have se…
-
https://github.com/oobabooga/text-generation-webui/discussions/1933