Mozilla-Ocho / llamafile

Distribute and run LLMs with a single file.
https://llamafile.ai

Fix GPU Layer Limitation in llamafile #534

Closed · BIGPPWONG closed this 2 weeks ago

BIGPPWONG commented 3 months ago

#533

In the current implementation, the line `n_gpu_layers = std::min(n_gpu_layers, (int)hparams.n_layer);` caps `n_gpu_layers` at `hparams.n_layer`. However, in the llama.cpp project, inside the `static void llm_load_hparams` function, `hparams.n_layer` is read via `ml.get_key(LLM_KV_BLOCK_COUNT, hparams.n_layer);`, so it counts only the transformer blocks that carry key-value (KV) attention and does not include other offloadable layers, such as the output layer.
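To make the effect concrete, here is a small standalone sketch of the clamping behavior described above. The struct, field names, and values are made up for illustration and are not llamafile's actual code:

```cpp
// Toy illustration (not llamafile's actual code): with the clamp quoted
// above, a request to offload "everything" can never exceed the block
// count, so any layer beyond hparams.n_layer (such as the output layer
// mentioned in this issue) stays on the CPU.
#include <algorithm>
#include <cstdio>

struct Hparams { int n_layer; };

int main() {
    Hparams hparams{32};     // e.g. a model with 32 transformer blocks
    int n_gpu_layers = 999;  // user passed -ngl 999, meaning "offload all"

    // The line discussed in this issue:
    n_gpu_layers = std::min(n_gpu_layers, (int)hparams.n_layer);

    std::printf("layers offloaded: %d of %d blocks (output layer not offloaded)\n",
                n_gpu_layers, hparams.n_layer);
    return 0;
}
```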

This restriction can hurt performance, which shows up as slower token generation and lower GPU utilization.

Either commenting out this line or raising the cap to `hparams.n_layer + 10` mitigates the issue, ensuring all necessary layers are offloaded to the GPU and improving overall performance, as sketched below.
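A hedged sketch of what the adjusted line could look like; the wrapper function and struct are assumed for illustration, and the real change would go directly where the quoted line lives:

```cpp
// Illustrative only, not a tested patch against llamafile.
#include <algorithm>

struct Hparams { int n_layer; };

// Clamp the requested GPU layer count, but leave headroom for layers that
// hparams.n_layer does not count (e.g. the output layer).
static int clamp_gpu_layers(int n_gpu_layers, const Hparams & hparams) {
    return std::min(n_gpu_layers, (int)hparams.n_layer + 10);
    // Alternatively, drop the clamp entirely and return n_gpu_layers
    // unchanged, relying on llama.cpp itself to bound the offload.
}
```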

cjpais commented 1 month ago

Following up on this: it is the same issue I mentioned in a DM to @jart. Removing this line should be sufficient, as far as I can tell.