ggerganov / llama.cpp

LLM inference in C/C++
MIT License

Bug: vulkan backend segfaults due to logging change #10376

Closed audiovention closed 1 hour ago

audiovention commented 1 hour ago

What happened?

The Vulkan backend crashes for me on Ubuntu with an AMD 7840U integrated GPU, while working fine with an RTX 3090. I've debugged the issue and found the broken commit to be 3225008973579cc6a784890c237e1bfc9de41819. Specifically, if I comment out the line

```cpp
GGML_LOG_DEBUG("ggml_vulkan: %d = %s (%s) | uma: %d | fp16: %d | warp size: %d\n", idx, device_name.c_str(), driver_props.driverName, uma, fp16, subgroup_size);
```

everything works fine. I suspect a null-pointer dereference in the logging, but I haven't investigated further. This commit breaks all versions after it.

Name and Version

version: 4090 (32250089)

As mentioned, this specific commit breaks everything after it.

What operating system are you seeing the problem on?

Linux

Relevant log output

No response

audiovention commented 1 hour ago

Never mind, this was fixed in https://github.com/ggerganov/llama.cpp/commit/9b75f03cd2ec9cc482084049d87a0f08f9f01517 as I was typing out my bug report.