Closed: teddybear082 closed this 1 week ago

I was helping someone else get this working over the last couple of days, and it struck me that it would be awesome if there were precompiled binaries, with and without CUDA support, for Windows/Mac/Linux, the same way there are for koboldcpp and llama.cpp. Any chance of that in the future?

Hi @teddybear082, we generally recommend installing from source, as this allows llama.cpp to be built with compiler optimizations customized for your system. Pre-built binaries, on the other hand, would either lack these optimizations or require maintaining a wide variety of binaries for different platforms. I could create a Docker image if that would make things a bit easier, but it would only work for Linux/Mac arm64/amd64.

OK, thanks for answering! They ultimately got it installed, and I have it working, so there's no need to do anything for me.
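For anyone landing here later, a minimal sketch of what the Docker image mentioned above might look like, assuming a plain llama.cpp-style CMake build. The repository URL, CMake flags, and binary name are illustrative assumptions, not this project's actual setup:

```dockerfile
# Build stage: compile from source inside the image.
# NOTE: repo URL and flags are assumptions based on a typical
# llama.cpp CMake build, not confirmed for this project.
FROM ubuntu:22.04 AS build
RUN apt-get update && apt-get install -y --no-install-recommends \
        git build-essential cmake ca-certificates \
    && rm -rf /var/lib/apt/lists/*
RUN git clone --depth 1 https://github.com/ggerganov/llama.cpp /src
WORKDIR /src
# Disabling native tuning keeps one image usable across arm64/amd64
# hosts, at the cost of the per-machine optimizations discussed above.
RUN cmake -B build -DGGML_NATIVE=OFF -DCMAKE_BUILD_TYPE=Release \
    && cmake --build build --config Release -j

# Runtime stage: copy only the built binaries to keep the image small.
FROM ubuntu:22.04
COPY --from=build /src/build/bin /usr/local/bin
ENTRYPOINT ["llama-cli"]
```

The two-stage layout means the toolchain (git, cmake, compilers) never reaches the runtime image, which is what makes a prebuilt-style distribution via a registry practical even when the underlying project only supports source builds.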