janhq / cortex.cpp

Run and customize local LLMs.
https://cortex.so
Apache License 2.0

epic: llama.cpp should support `LLVM` for ARM-based CPUs #1251

Status: Open · dan-homebrew opened this issue 13 hours ago

dan-homebrew commented 13 hours ago

Goal

Tasklist
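The epic does not yet spell out a plan, but as a rough illustration of what building llama.cpp with an LLVM toolchain for ARM-based CPUs could look like, here is a hedged sketch of a CMake configure step that selects Clang and enables ARMv8 SIMD features. The specific `-march` feature list (`dotprod`, `fp16`) is an assumption for illustration and would depend on the target CPU; it is not taken from this issue.

```shell
# Hypothetical sketch, not the epic's final plan: configure llama.cpp
# to compile with LLVM/Clang instead of the default compiler on an
# ARM64 host, with assumed ARMv8.2 feature flags.
cmake -B build \
  -DCMAKE_C_COMPILER=clang \
  -DCMAKE_CXX_COMPILER=clang++ \
  -DCMAKE_C_FLAGS="-march=armv8.2-a+dotprod+fp16" \
  -DCMAKE_CXX_FLAGS="-march=armv8.2-a+dotprod+fp16"

# Build the configured tree.
cmake --build build --config Release -j
```

In practice a Cortex integration would likely ship several prebuilt variants and pick one at runtime based on detected CPU features, similar to how the AVX variants referenced in the Reddit post below are handled on x86.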

dan-homebrew commented 13 hours ago

@eckartal Can you link the Reddit post here (and any other requests for this?)

eckartal commented 8 hours ago

> @eckartal Can you link the Reddit post here (and any other requests for this?)

[Screenshot of the Reddit post "Jan now runs faster on CPUs"]

Source: https://www.reddit.com/r/LocalLLaMA/comments/1fjpm7j/jan_now_runs_faster_on_cpus/