Mozilla-Ocho / llamafile

Distribute and run LLMs with a single file.
https://llamafile.ai

Enable GPU support in llama-bench #581

Closed: cjpais closed this 1 month ago

cjpais commented 1 month ago

Previously, the --ngl flag never took effect for llama-bench.

This change fixes that behavior, allowing llama-bench to use the GPU.

This change does not fix the outstanding bug mentioned in #577, where not all of the GPU layers are offloaded; that bug is fixed by #534.