-
https://github.com/oobabooga/text-generation-webui/discussions/1933
-
Could someone check if AI models can be run locally on cards like the 3060M 12GB with FrankenDriver? The easiest way is to try using the stand-alone LMstudio.ai application (closed source), but if som…
-
Clean Windows installation, latest oobabooga > git clone the bot > install requirements_ext.txt >
`ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. …
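That resolver warning usually means the environment already holds package versions that conflict with the pins being installed. A minimal sketch (hypothetical helper, standard-library only) of checking installed versions against a pinned requirements list, the kind of mismatch pip is warning about:

```python
# Hypothetical helper: compare installed package versions against pinned
# requirements, the sort of mismatch pip's resolver warning refers to.
from importlib.metadata import version, PackageNotFoundError


def check_requirements(pins: dict) -> dict:
    """Return {name: (wanted, installed)} for every pin that mismatches.

    installed is None when the package is absent entirely.
    """
    mismatches = {}
    for name, wanted in pins.items():
        try:
            installed = version(name)
        except PackageNotFoundError:
            installed = None
        if installed != wanted:
            mismatches[name] = (wanted, installed)
    return mismatches


# A pin that cannot be satisfied always shows up as a mismatch:
print(check_requirements({"definitely-not-installed": "1.0.0"}))
```

Running this against the actual `requirements_ext.txt` pins would show which packages a fresh venv needs to reinstall.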
-
Unsure if this is an exllamav2 issue or a llama-cpp issue. (In contrast, GGUF Q8_0 conversion of BF16 worked.)
When I loaded it via ooba/llama-cpp, inference broke when context length exceeded 4K, al…
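Inference breaking exactly past 4K is consistent with the prompt outgrowing the loader's configured context window (`n_ctx` in llama.cpp terms). A hedged sketch, with a hypothetical guard function, of keeping a tokenized prompt inside that window while reserving room for generation:

```python
# Hypothetical guard: keep a prompt within a model's context window
# (n_ctx), reserving room for the tokens still to be generated.
# If the prompt outgrows n_ctx, the oldest tokens are dropped.
def fit_prompt(tokens, n_ctx, max_new_tokens):
    budget = n_ctx - max_new_tokens
    if budget <= 0:
        raise ValueError("max_new_tokens exceeds the context window")
    # Keep only the most recent tokens that fit in the budget.
    return tokens[-budget:] if len(tokens) > budget else tokens


prompt = list(range(5000))          # a stand-in for 5000 prompt tokens
trimmed = fit_prompt(prompt, n_ctx=4096, max_new_tokens=256)
print(len(trimmed))  # 3840
```

The symptom described above would instead come from a loader that neither trims nor errors, so the window silently overflows.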
-
Hello, I really want you to know how much I love this extension!!! OMG
-
Hi brucepro
I installed your extension (Docker version) but keep getting the "no python_on whale" notice even though I already installed everything in requirements.txt.
Here is my log confirming that I already insta…
-
============================
CUDA_VERSION: 114
============================
NVCC path: /usr/local/cuda-11.4/bin/nvcc
GPP path: /usr/bin/g++ VERSION: g++ (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
CUDA…
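The log above encodes CUDA 11.4 compactly as `CUDA_VERSION: 114`, matching the nvcc path `/usr/local/cuda-11.4`. A small sketch (hypothetical helper, assuming the last digit is the minor version) of expanding that compact form back into a `(major, minor)` pair:

```python
# Hypothetical helper: expand a compact CUDA_VERSION string ("114" for
# CUDA 11.4), assuming the final digit is the minor version and the
# leading digits are the major version.
def parse_cuda_version(compact):
    return int(compact[:-1]), int(compact[-1])


print(parse_cuda_version("114"))  # (11, 4)
```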
-
### Describe the bug
Whenever I load certain GGUFs, I get the above error message in the terminal. I have seen it happen with Bartowski's Q8 quant of Llama 3 70B Instruct (3-part file) and llama-3-70B-…
-
### Self Checks
- [X] I have searched for [existing issues](https://github.com/langgenius/dify/issues), including closed ones.
- [X] I confirm that I am using English to fi…
-
### Describe the bug
As of today, no message is sent back by the AI. Settings are the default Colab/Gradio ones; I don't know how this computer beep-boop works.
### Is there an existing issue for thi…