-
### Describe the bug
llama.cpp models always give exactly the same output (compared in WinMerge to be sure), as if they ignore all sampling options and the seed. Sometimes the first output after loadin…
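The symptom reads like the sampler's seed is fixed (or silently ignored) between runs. A toy Python illustration, not llama.cpp code, of why a never-changing seed reproduces the exact same token sequence every time (the vocabulary and function names here are made up for the sketch):

```python
import random

def sample_tokens(seed, n=5):
    """Draw n 'tokens' from a toy vocabulary with a seeded RNG."""
    rng = random.Random(seed)
    vocab = ["the", "cat", "sat", "on", "mat"]
    return [rng.choice(vocab) for _ in range(n)]

# If the seed is fixed (or the seed option is ignored), every run is identical:
a = sample_tokens(42)
b = sample_tokens(42)
assert a == b

# Variation only returns when the seed actually changes between runs,
# which is what a seed of -1 ("random seed") is supposed to do:
c = sample_tokens(random.randrange(2**32))
```

If the backend always reuses one seed regardless of the UI setting, the output is deterministic no matter what temperature/top_p are set to.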
-
### Applies To
- [X] Notebooks (.ipynb files)
- [ ] Interactive Window and/or Cell Scripts (.py files with #%% markers)
### What happened?
I'm trying to upgrade from the deprecated `registerRemot…
-
Hey I'm trying to install ggml vicuna following this link: https://agi-sphere.com/install-textgen-webui-mac/
The textgen-webui works, as I tested it on a small LLM model; however, it doesn't work for…
-
Now Llama 3.1 is out, but sadly it is not loadable with the current text-generation-webui. I tried updating the transformers lib, which makes the model loadable, but I then get an error when trying to use …
-
So it seems like the bug is back (maybe Ooba was changed again?).
` File "E:\StableDiffusion\text-generation-webui\text-generation-webui\extensions\complex_memory\script.py", line 56, in save_pairs…
-
I tried to deploy VisualGLM-6B locally with [text-generation-webui](https://github.com/oobabooga/text-generation-webui), and loading the model fails with "Unrecognized configuration class".
Details below:
Traceback (most recent call last): File “/home/xx…
-
Hello, great model and thank you for all of your hard work!
I use the model in GPU mode on Windows and it runs very well, and I have implemented your model's capabilities in an extension for Oobabooga…
-
After running `python3 webui.py --port 9886 --model_dir speech_tts/CosyVoice-300M`, clicking any button in the webui raises a timeout error, as shown in the screenshot below.
The console error output follows. How can I fix this?
ERROR: Exception in ASGI application
Traceback (most recent call last):
File …
-
The following sometimes happens while completion is ongoing for large context sizes.
- My context size was: 3,262
- max_new_tokens was set to: 4,096
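For reference, those numbers already overflow a typical 4,096-token window: 3,262 prompt tokens plus 4,096 new tokens is 7,358, far beyond `n_ctx`. A small sketch (the `n_ctx` value here is an assumption, not taken from the report) of the clamp that keeps prompt + output inside the window:

```python
# Hypothetical context window; the report does not state the model's n_ctx.
n_ctx = 4096

prompt_tokens = 3262   # "My context size was: 3,262"
max_new_tokens = 4096  # "max_new_tokens was set to: 4,096"

# Total tokens the run would try to hold in the KV cache:
requested = prompt_tokens + max_new_tokens

if requested > n_ctx:
    # Clamp generation so the prompt plus the output fit the window.
    max_new_tokens = max(0, n_ctx - prompt_tokens)

print(max_new_tokens)  # 834
```

If the frontend skips a check like this, the backend runs past the context limit mid-generation, which would match a crash appearing only on long completions.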
```
Traceback (most recent call last):
…
-
Until yesterday everything worked fine, but today, using the same wildcards and prompts, it doesn't work and prints this in the terminal each time I start a generation that includes wildcards in its p…