Open · NinjAiBot opened this issue 1 year ago
Please check this comment https://github.com/mudler/LocalAI/issues/1197#issuecomment-1779573484
I was able to solve it by relinking protoc:

```bash
brew link protobuf
make clean
make BUILD_TYPE=metal build
```
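If you want to confirm the relink actually took effect before kicking off another build, a quick sanity check might look like the sketch below (the expected path assumes a standard Apple Silicon Homebrew prefix and may differ on your machine):

```bash
# Verify protoc now resolves to the Homebrew-linked binary.
which protoc          # expect something like /opt/homebrew/bin/protoc
protoc --version      # should print the libprotoc version Homebrew installed
```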
Did the build succeed? It's OK if you get that warning during the build process.
> Did the build succeed? It's OK if you get that warning during the build process.
No.
I've never managed to get the build past this point. It always just seems to stop there and never progresses.
I built with make and saw the same error the OP saw, which I worked around with:

```bash
BUILD_GRPC_FOR_BACKEND_LLAMA=on make backend/cpp/llama/grpc-server
```

Rerunning the build target afterwards completed compilation on an M2 chip running Fedora Asahi Remix.
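For anyone hitting the same wall, the full sequence would presumably look something like the sketch below. Only the `BUILD_GRPC_FOR_BACKEND_LLAMA` step comes from the workaround above; the surrounding clean/build steps are assumptions about the usual make targets:

```bash
# Assumed workaround sequence; only the grpc-server step is confirmed above.
make clean
BUILD_GRPC_FOR_BACKEND_LLAMA=on make backend/cpp/llama/grpc-server
make build   # rerun the main build target (no BUILD_TYPE=metal on Linux/Asahi)
```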
@NinjAiBot my output looks just like yours, and it's working for me. I just followed the next steps in "Example: Build on mac" to download ggml-gpt4all-j.bin and asked the model how it was. Try it! Thanks @renzo4web for the `brew link protobuf` step, which fixed the build for me.
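If it helps anyone verify their own build, a minimal smoke test against a running LocalAI instance might look like the sketch below. The port (8080) and the model name are assumptions based on the LocalAI docs; adjust both to your setup:

```bash
# Hypothetical smoke test: ask the freshly built server a question.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ggml-gpt4all-j",
        "messages": [{"role": "user", "content": "How are you?"}]
      }'
```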
I just used LM Studio instead. It was the easiest way to spin up a server to chat with a model, which is what I needed to do.
**LocalAI version:** Most recent as of this report

**Environment, CPU architecture, OS, and Version:** M2 Max MacBook Pro, macOS (ARM64)
**Describe the bug**
Running the installer from the official documentation on macOS ARM64 fails at this step:

```bash
cd llama.cpp && mkdir -p build && cd build && cmake .. -DLLAMA_METAL=OFF && cmake --build . --config Release
```
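One way to surface the underlying error for triage (a sketch; run it from wherever the installer checks out llama.cpp, using the same relative path as the command above) is to repeat that step by hand with verbose output:

```bash
# Rerun the failing configure/build step manually to see the first real error.
cd llama.cpp && mkdir -p build && cd build
cmake .. -DLLAMA_METAL=OFF
cmake --build . --config Release --verbose
```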
**To Reproduce**
Follow the documented install steps on an M2 Max MacBook Pro.
**Expected behavior**
A successful install.
**Logs**
Full error: