-
Linux on AMD64, Debian Jessie.
Complete output from running `make` in `frequensea/build` is here (errors at the bottom, and I am lost as to how to correct them):
```
david@Minnie:~/frequensea/build$ make
Scanning dependencies …
```
-
### What happened?
After compiling llava and running it on Windows, it is very slow because it is using the CPU. I don't know how to switch to GPU acceleration.
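A common fix (a minimal sketch, assuming this is llama.cpp's llava; the CMake flag and binary name have changed between versions, so treat them as assumptions) is to rebuild with the GPU backend enabled and offload layers at run time:

```shell
# Rebuild with CUDA enabled (older trees used -DLLAMA_CUBLAS=ON instead of -DGGML_CUDA=ON).
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release

# Offload layers to the GPU with -ngl / --n-gpu-layers.
# Binary name varies by version (llava-cli in older trees); model paths are placeholders.
./build/bin/llama-llava-cli -m model.gguf --mmproj mmproj.gguf -ngl 99 \
  -p "describe the image" --image photo.jpg
```

If `-ngl` is accepted but inference still runs on the CPU, the build most likely did not pick up the GPU backend.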
### Name and Version
version: …
-
### Describe the bug
Crash with abort when trying to use an AMD graphics card in the editor.
Model is mistral-7b-instruct-v0.2.Q4_K_M.gguf
```
ggml_cuda_init: found 1 ROCm devices:
Device 0: AMD Radeon RX…
```
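One hedged first step when ROCm aborts at startup (assuming a standard ROCm install; the binary name below is a placeholder, not from the report) is to confirm the runtime enumerates the card and to pin it explicitly:

```shell
# Hypothetical debugging step: list the GPUs the ROCm runtime can see.
rocminfo | grep -i 'marketing name'

# Restrict HIP to the first device before launching (binary name is a placeholder).
HIP_VISIBLE_DEVICES=0 ./editor
```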
-
### Context
Using the latest llama.cpp server (commit 17e98d4c96a583d420f12046bc92102381dbd28e).
The server was started with a llama70b-F16-like model:
```shell
server \
--model model-f16.gguf \
--ctx-size 32…
```
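A quick way to exercise the server once it is up (a sketch only; host and port are assumptions, since the startup flags above are truncated):

```shell
# Hypothetical smoke test: POST a short completion request to the default endpoint.
curl -s http://127.0.0.1:8080/completion \
  -H 'Content-Type: application/json' \
  -d '{"prompt": "Hello", "n_predict": 16}'
```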
-
Hello,
I've been trying to run it on Windows with a GPU and different models.
I tried CUDA 11.8 and 12.2 with PyTorch compiled for the matching version, but wasn't able to even chat with a model.
Is it…
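A hedged sanity check for this class of problem (assuming PyTorch is the backend in use) is to confirm the installed build actually sees the GPU before involving any model:

```shell
# Print the CUDA version PyTorch was built against and whether a GPU is visible.
python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"
```

If this prints `None False`, the installed wheel is CPU-only, regardless of which CUDA toolkit is on the system.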
-
I'm running Ubuntu 22.04 in a virtual machine on a Windows host. I am using X11 instead of Wayland, though I've tried both.
I've made sure to install the latest GLFW (3.3.7), and I took out "-Wall -W…
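Since OpenGL inside a VM often falls back to software rendering, one hedged check (assuming the `mesa-utils` package is installed; this is not part of the original report) is to see which renderer the guest actually exposes:

```shell
# Show whether the VM exposes a hardware or software (llvmpipe) OpenGL renderer.
glxinfo | grep -i 'opengl renderer'
```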
-
### Title:
Issue with Passing Custom Arguments to `llama_cpp.server` in Docker
#### Issue Description:
Hello @abetlen,
I've been trying to use your Docker image `ghcr.io/abetlen/llama-cpp-pyt…
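For reference, one way this is commonly attempted (a sketch only; the image tag, mount path, and flags are assumptions, since the original image name is truncated above) is to override the container command so custom flags reach `llama_cpp.server`:

```shell
# Hypothetical invocation: mount a model directory and pass custom server arguments.
docker run --rm -p 8000:8000 -v /path/to/models:/models \
  ghcr.io/abetlen/llama-cpp-python:latest \
  python3 -m llama_cpp.server --model /models/model.gguf --host 0.0.0.0 --port 8000
```

Whether this works depends on how the image's entrypoint is defined, which is presumably what the issue is about.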
-
### Share your progress and tweet link:
- Day 1 tweet
  - https://twitter.com/chraem2/status/1421802287849115660?s=20
- Description
  - Started exploring a Python library called pygame.
- Resou…
-
Running on vanilla Debian:
```
$ python run_inference.py -p "Daniel went back to the the the garden. Mary travelled to the kitchen. Sandra journeyed to the kitchen. Sandra went to the hallway. John …
```
-
@duncanpeacock
Relates to #681