ggerganov / llama.cpp

LLM inference in C/C++
MIT License

Bug: llama-server not logging to file #10078

Open PyroGenesis opened 1 month ago

PyroGenesis commented 1 month ago

What happened?

I've been trying to get llama-server to log details to a file using the --logdir argument. However, nothing is logged at all; not even a log file is created.

Name and Version

llama-cli:
version: 3849 (8277a817) built with MSVC 19.29.30154.0 for x64

llama-server:
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 CUDA devices:
  Device 0: NVIDIA A100 80GB PCIe, compute capability 8.0, VMM: no
  Device 1: NVIDIA A100 80GB PCIe, compute capability 8.0, VMM: no
version: 3980 (896b6ede) built with MSVC 19.41.34120.0 for x64

What operating system are you seeing the problem on?

Windows

Relevant log output

No response

morgen52 commented 1 month ago

The same issue appears on Linux. I don't know what the expected behavior of --logdir is, and I haven't seen the YAML logs mentioned in the documentation, but the --log-file option seems to work fine.

ggerganov commented 1 month ago

Use the --log-file FNAME argument. The --logdir LOGDIR argument is for YAML logs, which are not relevant for llama-server.
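For reference, a minimal invocation using --log-file might look like the following (the model path and log filename are placeholders, not from the original report):

```shell
# Sketch: write llama-server logs to a file with --log-file.
# ./models/model.gguf and llama-server.log are placeholder paths.
./llama-server -m ./models/model.gguf --log-file llama-server.log
```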

morgen52 commented 1 month ago

Could you please provide an example of the YAML logs? In what situations are they generated?

JohannesGaessler commented 1 month ago

The --logdir LOGDIR is for YAML logs which are not relevant for llama-server.

I haven't gotten around to it, but I think --logdir should be removed again. It's horribly outdated, and by now there are better alternatives for the things I was using it for.