-
I tried to use vscord, but when I try to edit code it says:
`Couldn't connect to Discord via RPC: RPC_CONNECTION_TIMEOUT: Connection timed out`
I do have the necessary settings enabled in Discord:…
-
### Which package has the bugs?
The core library
### Issue description
1. Create a slash command.
2. Try to trigger it in a user channel.
3. The error is shown there, but the same command works in a guild channel.
### Code sample
…
-
### What happened?
Using: https://huggingface.co/bartowski/Hermes-2-Theta-Llama-3-8B-GGUF/blob/main/Hermes-2-Theta-Llama-3-8B-Q6_K.gguf
**llama-cli**
```bash
./llama-cli -m ~/data/models/Hermes-…
```
-
### What happened?
Hi there.
I got unexpected slot_id values and responses when sending 4 concurrent requests to a llama-server started with:
```bash
./llama.cpp-b3938/build_gpu/bin/llama-server -…
-
Hello,
Is there a way to create an embedding model object (with `LiteLLMEmbeddingModel()`, I guess) from a locally served embedding model?
To be more precise, I run in parallel:
- 'Mixtral-7x…
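
The model list above is truncated, but the general pattern works the same for any locally served model. A minimal sketch of pointing LiteLLM at a local OpenAI-compatible endpoint; the model name, port, and route below are assumptions to adapt to your own setup:

```python
def local_embedding_kwargs(model_name: str,
                           texts: list[str],
                           base_url: str = "http://localhost:8000/v1") -> dict:
    """Arguments for litellm.embedding() against a local OpenAI-style server."""
    return {
        # the "openai/" prefix tells LiteLLM to speak the OpenAI-compatible API
        "model": f"openai/{model_name}",
        "input": texts,
        "api_base": base_url,
        "api_key": "not-needed-locally",  # most local servers ignore the key
    }


def embed_locally(model_name: str, texts: list[str]) -> list[list[float]]:
    import litellm  # imported here so the helper above stays dependency-free
    resp = litellm.embedding(**local_embedding_kwargs(model_name, texts))
    return [item["embedding"] for item in resp.data]
```

If the higher-level `LiteLLMEmbeddingModel()` wrapper accepts pass-through LiteLLM kwargs, forwarding the same `api_base`/`api_key` pair should route it to the local server as well.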
-
I didn't realize until now, but the port number doesn't work. I'm not sure if I should open another issue about this, but from what I can tell the lancache-dns Docker container doesn't allow custom ports on the Upstre…
-
### Contact Details
hxyz@protonmail.com
### What happened?
I expected llamafile to offload compute to the GPU when running as a systemd service file, but that didn't happen.
Here's the systemd s…
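
The unit file itself is cut off above. As a hedged sketch (the paths, service user, and flags are assumptions, not the poster's actual unit), a service that typically keeps the GPU reachable looks something like:

```ini
[Unit]
Description=llamafile server
After=network.target

[Service]
# GPU offload must be requested explicitly; -ngl offloads layers to the GPU.
ExecStart=/opt/llamafile/llamafile -m /opt/models/model.gguf --server --nobrowser -ngl 999
# A non-root service user also needs access to the GPU device nodes:
SupplementaryGroups=video render
# llamafile caches its GPU support module under $HOME, which systemd may not set:
Environment=HOME=/var/lib/llamafile
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Hardening options such as `PrivateDevices=` or `ProtectHome=` can also hide the GPU device nodes or the cache directory from the service, so they are worth checking in the real unit.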
-
EDIT: Overview has been moved to wiki. See https://github.com/RLBot/wiki/pull/7
---
This issue is a WIP description of v5 (beta) and will contain
- Our reasoning for creating v5 and an overvie…
-
### What happened?
I can run llamafile fine when I type the following command in my Ubuntu terminal:
`/llamafile.exe -m /model.gguf --server --nobrowser`
Using the subprocess module, I want t…
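
The rest of the snippet is cut off, but the usual shape of such a launcher, as a hedged sketch (the binary and model paths are placeholders for wherever yours live):

```python
import subprocess


def build_cmd(binary: str, model: str) -> list[str]:
    # Pass arguments as a list so no shell quoting is involved.
    return [binary, "-m", model, "--server", "--nobrowser"]


def launch(binary: str, model: str) -> subprocess.Popen:
    # Popen returns immediately; the server keeps running in the background
    # until proc.terminate() is called.
    return subprocess.Popen(build_cmd(binary, model),
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
```

If the executable refuses to start directly (an APE/binfmt quirk on some Linux systems), a commonly suggested workaround is to invoke it through `sh` instead, i.e. prepend `"sh"` to the argument list.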
-
### What happened?
After building the SYCL server image, trying to load a model larger than Q4 on my Arc A770 fails with a memory error.
Anything below Q4 will execute, but this is due to the "llm_l…