Mozilla-Ocho / llamafile

Distribute and run LLMs with a single file.
https://llamafile.ai

two unconditional stray printfs in llamafile/cuda.c #526

Closed leighklotz closed 2 months ago

leighklotz commented 3 months ago

https://github.com/Mozilla-Ocho/llamafile/blob/c39d30c5306432eedebf58bdee6424152a613674/llamafile/cuda.c#L912 https://github.com/Mozilla-Ocho/llamafile/blob/c39d30c5306432eedebf58bdee6424152a613674/llamafile/cuda.c#L913

```
$ echo 2+3= | /path/to/llamafile -m /path/to/Mistral-7B-Instruct-v0.3-Q6_K.gguf --cli --gpu nvidia -ngl 33 -c 4096 --repeat-penalty 1 -t 10 -f /dev/stdin --silent-prompt --no-display-prompt --log-disable --seed -1
FLAG_nocompile 0
FLAG_recompile 0
5.
...
```
jart commented 2 months ago

This was fixed before the 0.8.13 release went out. Thanks for the report.