-
Extensive build errors. Here's the log; please advise:
```
┌[loch@Harbinger] [master] [1627871380]
└[~/Git/frequensea-git]> makepkg --force --install -s
==> Making package: frequensea-git 3…
```
-
### Discussed in https://github.com/ggerganov/llama.cpp/discussions/9228
Originally posted by **bulaikexiansheng** August 29, 2024
I am trying to use the speculative decoding script; the command is …
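For context, llama.cpp's speculative decoding example is typically invoked with a large target model and a small draft model. A generic sketch (binary name, flag spellings, and model paths are assumptions and may differ between llama.cpp versions; check `--help` on your build):

```shell
# Speculative decoding sketch: -m is the target (large) model,
# -md the draft (small) model. Paths here are placeholders.
./llama-speculative \
  -m models/target-q8_0.gguf \
  -md models/draft-q4_0.gguf \
  -p "Hello, my name is" -n 64
```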
-
### Feature request
Could gpt4all be adapted so that llama.cpp can be launched with x number of layers offloaded to the GPU?
At the moment, it is either all or nothing, complete GPU-offloading or …
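For reference, llama.cpp's own CLI already supports this kind of partial offload via `-ngl` / `--n-gpu-layers`; the request is essentially for gpt4all to surface that count. A hedged sketch (model path is a placeholder):

```shell
# Offload only 20 of the model's layers to the GPU; the rest
# stay on the CPU. The -ngl flag is llama.cpp's, not gpt4all's.
./llama-cli -m models/model-q4_0.gguf -ngl 20 -p "Hello"
```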
-
### What happened?
I have an RPC server at 10.90.26.1:50052; it works with the following command.
./llama-cli -m /data/zsq/models/qwen2-7b-instruct-q8_0.gguf --repeat_penalty 1.0 --color -i -r "User:" -f pro…
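On RPC-enabled builds of llama.cpp, the client side is pointed at a backend with `--rpc`; a sketch using the server and model from the report (exact flag availability depends on how the binary was built):

```shell
# Route computation to the remote RPC backend at 10.90.26.1:50052.
./llama-cli -m /data/zsq/models/qwen2-7b-instruct-q8_0.gguf \
  --rpc 10.90.26.1:50052 -p "Hello"
```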
-
### Discussed in https://github.com/ggerganov/llama.cpp/discussions/8704
Originally posted by **ElaineWu66** July 26, 2024
I am trying to compile and run llama.cpp demo on my android device (Q…
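For an Android cross-compile, the usual route is the NDK's CMake toolchain file. A minimal sketch, assuming an `ANDROID_NDK` environment variable and an arm64 target (ABI and platform level are assumptions):

```shell
# Configure a cross-compile with the Android NDK's CMake toolchain,
# then build in Release mode.
cmake -B build-android \
  -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake \
  -DANDROID_ABI=arm64-v8a \
  -DANDROID_PLATFORM=android-28
cmake --build build-android --config Release
```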
-
### Summary:
When testing the latest version of llama-cpp-python (0.1.64) alongside [the corresponding commit of llama.cpp](https://github.com/ggerganov/llama.cpp/tree/8596af427722775f0df4a7c90b9af06…
-
### Which application or package is this feature request for?
discord.js
### Feature
I was having some problems deploying my app commands in my server; they weren't deploying at all and I was not g…
-
### What happened?
According to [Homebrew llama.cpp pull-request history](https://github.com/Homebrew/homebrew-core/pulls?q=is%3Apr+llama) the [llama.cpp formula](https://formulae.brew.sh/formula/l…
-
### What happened?
For Intel dGPUs like the ARC770, tokens per second do not scale with increasing batch size. For example, if throughput at batch size 1 is ~x tps, then at batch size 8 the throughput is also ~x tp…
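This kind of scaling behavior can be quantified with llama.cpp's bench tool across several batch sizes; a sketch (flag spellings may vary by version, model path is a placeholder):

```shell
# Measure throughput at batch sizes 1, 8, and 16 with all layers
# offloaded, to show whether tps grows with batch size.
./llama-bench -m models/model-q8_0.gguf -ngl 99 -b 1,8,16
```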
-
So I am on Manjaro and did everything as the guide said, but when I run `systemctl status tf2richpresence --user` it shows
```bash
[rafii2198@Rafii-Manjaro tf2disc-linux]$ systemctl status tf2richpresence -…
```