-
GGML version from TheBloke is coming soon: https://huggingface.co/TheBloke/Llama-2-7B-GGML
Thanks!
-
### Describe the bug
When using the latest version after the GGUF merge (commit 0c9e818bb874df6429bf1c5fe96be6400516abd9), the generated text appears to be missing some words.
Example:
> Common sense q…
-
I have a real bug this time.
```
ggml_new_tensor_impl: not enough space in the scratch memory pool (needed 575702016, available 268435456)
Segmentation fault
```
This affects ggml implementat…
-
### Describe the bug
Hi,
I successfully set up the environment after running start_macos.sh with the latest version. But when the server starts, it raises the error: TypeError: replace() argument 1 mus…
-
I've been using this chatdocs project with a GGML model, which has worked really well, if a bit slow. I have read a lot online about GPTQ models delivering significantly better speeds, but when I triale…
-
When I tried upgrading the text-generation-webui wheel version to 0.2.17, I noticed that it no longer uses CUDA for inference. I also tested this in a smaller project and it behaved the same there. No errors are …
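For context, a minimal sketch of how GPU offloading is normally requested with ctransformers (the model path is a placeholder); if this was already the setup before the upgrade, the change in behaviour would point at the wheel itself:

```python
# Requires the CUDA build of the wheel: pip install ctransformers[cuda]
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "models/llama-2-7b.ggmlv3.q4_0.bin",  # placeholder path
    model_type="llama",
    gpu_layers=50,  # layers offloaded to the GPU; 0 keeps everything on the CPU
)

print(llm("Hello", max_new_tokens=16))
```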
-
How do I specify model() parameters from here: https://github.com/marella/ctransformers#config
Placing them inside model() does not raise an error, but they are ignored for my MPT model, i.e. I can …
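For comparison, here is a minimal sketch of how the config values from the linked README are usually passed when calling ctransformers directly (the model path is a placeholder; whether your model() wrapper forwards them is a separate question):

```python
from ctransformers import AutoModelForCausalLM

# Load-time settings go to from_pretrained(); the path below is a placeholder.
llm = AutoModelForCausalLM.from_pretrained(
    "models/mpt-7b-instruct.ggmlv0.q4_0.bin",
    model_type="mpt",
    context_length=2048,
    threads=8,
)

# Generation settings can also be passed per call.
print(llm("Write a haiku about rivers.", max_new_tokens=64, temperature=0.7, top_p=0.95))
```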
-
@marella thanks, very useful project.
I use a llama GGML model.
With the latest `llama.cpp` it says `Привіт` ("Hello" in Ukrainian), but with ctransformers only `ривіт`.
It affects all words, not only the first word in a response.…
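In case it helps narrow this down, a small sketch (model path and prompt are assumptions) comparing non-streamed and streamed output; if the two differ, the missing leading characters are probably introduced during per-token detokenization:

```python
from ctransformers import AutoModelForCausalLM

# Placeholder path to the llama GGML model being tested.
llm = AutoModelForCausalLM.from_pretrained(
    "models/llama-7b.ggmlv3.q4_0.bin", model_type="llama"
)

prompt = "Привіт! Як справи?"  # "Hello! How are you?" in Ukrainian

# top_k=1 keeps decoding greedy so both runs should produce the same tokens.
full = llm(prompt, max_new_tokens=32, top_k=1)
streamed = "".join(llm(prompt, max_new_tokens=32, top_k=1, stream=True))

print(repr(full))
print(repr(streamed))
```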
-
Operating System: Ubuntu 20.04.6 LTS
Kernel: Linux 5.4.0-146-generic
Architecture: x86-64
configs/local_config.yaml shows:
...
binding_name: llama_cpp_official
...
model_…
-
```
Traceback (most recent call last):
  File "~/git/lollms-webui/app.py", line 1668, in <module>
    user_avatar_path = lollms_paths.personal_user_infos_path / "default_user.svg"
AttributeError: 'LollmsP…