-
**Describe the bug**
```
-- Installing: /wrkdirs/usr/ports/devel/tabby/work/target/release/build/llama-cpp-server-8837603d1835d022/out/bin/llama-tokenize
cargo:root=/wrkdirs/usr/ports/devel/tabb…
```
-
### What happened?
Hi there.
My llama-server works correctly with the following command:
```bash
/llama.cpp-b3985/build_gpu/bin/llama-server -m ../artifact/models/Mistral-7B-Instruct-v0.3.Q4_1.g…
```
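For context, a minimal llama-server invocation generally looks like the sketch below; the model path and flag values are illustrative placeholders, not the reporter's exact command line:

```bash
# Illustrative sketch: start llama-server with a local GGUF model.
# -m: model path (placeholder), -c: context size, -ngl: layers offloaded to GPU.
./llama-server -m ./models/model.gguf -c 4096 -ngl 99 --host 0.0.0.0 --port 8080
```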
-
### What happened?
I've been trying to get llama-server to log details to a file using the `--logdir` argument. However, nothing is logged at all; not even a log file is created.
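As a sanity check (a hedged sketch, not the reporter's setup): logging option names have changed across llama.cpp releases, so the flags actually compiled into a given build can be listed from the binary itself:

```bash
# Show every logging-related option this particular llama-server build accepts;
# the help text of the installed binary is authoritative.
./llama-server --help 2>&1 | grep -i log
```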
### Name and Vers…
-
### What happened?
A bug happened!
### Steps to reproduce
1. step one...
2. step two...
### What OS are you seeing the problem on?
_No response_
### Relevant log output
```shell
invalid utf-…
```
-
### What happened?
I can successfully build llama.cpp using make, but it fails when using cmake. I tried this on Ubuntu 22.04 LTS and 24.04 LTS on my Intel Core Ultra 9 185H. It seems that is sea…
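For comparison, the CMake path documented in the llama.cpp README is roughly the following; the build-directory name is arbitrary:

```bash
# Configure into a separate build directory, then compile in Release mode.
cmake -B build
cmake --build build --config Release
```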
-
### Summary
I am a newbie, just curious about this. I tried using the script to install on a 2016 MacBook Pro running macOS 12.7.6 (21H1320). There was a minor hiccup initially because my Python version was not…
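If the hiccup was a Python version mismatch, a quick pre-flight check (a minimal sketch, assuming the install script is driven by python3) is:

```bash
# Confirm which interpreter the install script will pick up, and its version.
python3 --version
which python3
```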
-
### What happened?
We are running llama-server on a Radeon RX 7900 XT, with the command line `./llama-server -t 4 -ngl 50 -c 13000 --host 0.0.0.0 --port 18080 --mlock -m mistral-nemo-instruct-2407-q8…
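Assuming the ROCm stack is in use for the RX 7900 XT (an assumption; the excerpt does not say which backend was built), the card's visibility can be confirmed before launching the server:

```bash
# rocm-smi ships with ROCm and lists detected AMD GPUs plus VRAM usage;
# if the card is missing here, llama-server cannot offload layers to it.
rocm-smi
```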
-
### What is the issue?
When I upgraded the image to 0.4.0, the model that previously worked encountered this error. The full log output is as follows:
```
2024/11/14 11:29:13 routes.go:1189: INFO server co…
```
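Since the report mentions upgrading an image, the full startup log can usually be pulled from the container itself (a sketch assuming a Docker container named `ollama`; adjust the name to match `docker ps`):

```bash
# Capture the tail of the server log around the failure for the issue report.
docker logs ollama 2>&1 | tail -n 100
```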
-
### What happened?
Transcribing a 1-hour multi-speaker file generates the following output:
```
00:00 --> 01:20
Speaker 1:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!…
```
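For what it's worth, runs of repeated tokens like this are often worked around in whisper.cpp by tightening the decoder's fallback settings; the flags below exist in the main example's help output, but treating them as a fix for this particular file is an assumption:

```bash
# -mc 0 disables carrying text context between 30 s windows, a common trigger
# for repetition loops; --entropy-thold marks a decode as degenerate when token
# entropy falls below the threshold, forcing a temperature fallback, so raising
# it from the default 2.40 catches more repetitive output.
./main -m models/ggml-base.en.bin -f audio.wav -mc 0 --entropy-thold 2.8
```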
-
Clang doesn't format the files and throws errors saying it failed to find imports from external libraries (I use CMake):
```console
#include "sherpa-onnx/c-api/c-api.h"
'sherpa-onnx/c-api/c-api.h' fi…
```
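Errors like this usually mean the clang-based tooling lacks a compilation database rather than the build itself being broken. A common remedy (a sketch, assuming the errors come from clangd or clang-tidy, which resolve includes via compile_commands.json at the project root) is:

```bash
# Ask CMake to emit compile_commands.json with the real include paths,
# then expose it at the source root where the tooling looks for it.
cmake -B build -DCMAKE_EXPORT_COMPILE_COMMANDS=ON
ln -sf build/compile_commands.json .
```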