-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [X] I am running the latest code. Development is very rapid so there are no tagged versions as of…
-
INB4: this is **not about setting Top-P to 1.0**, which causes the same output every time for the same prompt and is documented here: https://github.com/abetlen/llama-cpp-python/issues/1797
When loading …
-
### Discussed in https://github.com/ggerganov/llama.cpp/discussions/9960
Originally posted by **SteelPh0enix** October 20, 2024
I've been using llama.cpp w/ ROCm 6.1.2 on latest Windows 11 for…
-
# Prerequisites
- I am running the latest code. Development is very rapid so there are no tagged versions as of now.
- I carefully followed the [README.md](https://github.com/abetlen/llama-cpp-pyth…
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as of…
-
### System Info
SERVER: Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
PRETTY_NAME: "Debian GNU/Linux 11 (bullseye)"
python: 3.11.5
conda: 23.10.0
torch: 2.4.1+cpu
### Running Xinference with D…
-
Setting `top_p = 1` causes outputs to be identical even with a random seed. This was first reported in https://github.com/oobabooga/text-generation-webui/issues/6431#issuecomment-2409089861. See the full …
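For reference, a minimal reproduction sketch of the behavior described above, using llama-cpp-python; the model path and prompt are placeholders, not taken from the original report. With `top_p=1.0` and a random seed requested via `seed=-1`, repeated calls were reported to return the identical completion:

```python
# Minimal reproduction sketch of the reported top_p = 1.0 determinism
# (hypothetical model path and prompt; not from the original report).
from llama_cpp import Llama

llm = Llama(model_path="./model.gguf", seed=-1)  # seed=-1 requests a random seed

for _ in range(3):
    out = llm.create_completion(
        "Write one short sentence about the sea.",
        max_tokens=32,
        temperature=0.8,
        top_p=1.0,  # the reported trigger: outputs repeat despite the random seed
    )
    print(out["choices"][0]["text"])
```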
-
1. By running `CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 python setup.py bdist_wheel`, I can build a wheel and install it, which gives:
```console
llama_cpp:
total 3.8M
-rwxrwxr-x 1 …
```
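As a quick sanity check after installing such a wheel, something along these lines can confirm whether the build actually has GPU offload compiled in, assuming a recent llama-cpp-python that exposes the `llama_supports_gpu_offload` binding:

```python
# Sanity check that the installed wheel was built with GPU offload support.
# llama_supports_gpu_offload is a binding to the llama.cpp C API; assumed
# to be present in the installed version.
import llama_cpp

print("llama-cpp-python version:", llama_cpp.__version__)
print("GPU offload supported:", llama_cpp.llama_supports_gpu_offload())
```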
-
Whenever I try to ingest my documents, it shows me this error. *Why*?
It also gives me the solution `pip install llama-cpp-python`.
Can someone tell me what this is used for in the current pr…
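For context, document-ingestion pipelines typically pull in llama-cpp-python to compute embeddings locally. A minimal sketch of that use case, assuming a hypothetical GGUF embedding model path:

```python
# Sketch of the embedding use case behind document ingestion
# (hypothetical model path; install with: pip install llama-cpp-python).
from llama_cpp import Llama

embedder = Llama(model_path="./embedding-model.gguf", embedding=True)

chunks = ["First document chunk.", "Second document chunk."]
result = embedder.create_embedding(chunks)
for item in result["data"]:
    print(len(item["embedding"]))  # dimensionality of each vector
```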
-
Hello @lamm-mit,
I am getting this error on the llama server.
This happens after submitting and setting up the web server UI,
as mentioned in other issues here:
Text Generation Model: openai/custom_model…
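For anyone reproducing this, a minimal sketch of the client side of such a setup, assuming the llama server exposes an OpenAI-compatible endpoint on localhost:8000 and `custom_model` is the configured model alias (both are placeholder assumptions):

```python
# Sketch of a request against an OpenAI-compatible llama server
# (base_url, api_key, and model alias are placeholder assumptions).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="sk-no-key-required")

response = client.chat.completions.create(
    model="custom_model",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```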