mudler / LocalAI

:robot: The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. Features: Generate Text, Audio, Video, Images, Voice Cloning, Distributed inference
https://localai.io
MIT License

Error building on MBA M2 #713

Open MC-Bourguiba opened 1 year ago

MC-Bourguiba commented 1 year ago

LocalAI version: commit 3829aba869f8925dde7a1c9f280a4718dda3a18c / docker 6102e12c4df1

Environment, CPU architecture, OS, and Version: MacBook Air M2, Ventura 13.4

Describe the bug: Unable to build either locally or using Docker. Both methods yield the same error.

To Reproduce

Expected behavior

Logs

ar src libbloomz.a bloomz.o ggml.o utils.o
touch prepare
I local-ai build info:
I BUILD_TYPE: 
I GO_TAGS: 
I LD_FLAGS:  -X "github.com/go-skynet/LocalAI/internal.Version=v1.20.1-1-g3829aba-dirty" -X "github.com/go-skynet/LocalAI/internal.Commit=3829aba869f8925dde7a1c9f280a4718dda3a18c"
CGO_LDFLAGS="" C_INCLUDE_PATH=/Users/chedly/LocalAI/go-llama:/Users/chedly/LocalAI/go-stable-diffusion/:/Users/chedly/LocalAI/gpt4all/gpt4all-bindings/golang/:/Users/chedly/LocalAI/go-ggml-transformers:/Users/chedly/LocalAI/go-rwkv:/Users/chedly/LocalAI/whisper.cpp:/Users/chedly/LocalAI/go-bert:/Users/chedly/LocalAI/bloomz LIBRARY_PATH=/Users/chedly/LocalAI/go-piper:/Users/chedly/LocalAI/go-llama:/Users/chedly/LocalAI/go-stable-diffusion/:/Users/chedly/LocalAI/gpt4all/gpt4all-bindings/golang/:/Users/chedly/LocalAI/go-ggml-transformers:/Users/chedly/LocalAI/go-rwkv:/Users/chedly/LocalAI/whisper.cpp:/Users/chedly/LocalAI/go-bert:/Users/chedly/LocalAI/bloomz go build -ldflags " -X "github.com/go-skynet/LocalAI/internal.Version=v1.20.1-1-g3829aba-dirty" -X "github.com/go-skynet/LocalAI/internal.Commit=3829aba869f8925dde7a1c9f280a4718dda3a18c"" -tags "" -o local-ai ./
# github.com/go-skynet/go-llama.cpp
binding.cpp:634:15: warning: 'llama_init_from_file' is deprecated: please use llama_load_model_from_file combined with llama_new_context_with_model instead [-Wdeprecated-declarations]
go-llama/llama.cpp/llama.h:162:15: note: 'llama_init_from_file' has been explicitly marked deprecated here
go-llama/llama.cpp/llama.h:30:56: note: expanded from macro 'DEPRECATED'
# github.com/go-skynet/go-bert.cpp
In file included from gobert.cpp:6:
go-bert/bert.cpp/bert.cpp:692:74: warning: format specifies type 'int' but the argument has type 'unsigned long' [-Wformat]
# github.com/go-skynet/go-ggml-transformers.cpp
replit.cpp:65:50: warning: format specifies type 'int' but the argument has type 'std::vector<unsigned long>::value_type' (aka 'unsigned long') [-Wformat]
# github.com/go-skynet/LocalAI
/opt/homebrew/Cellar/go/1.20.5/libexec/pkg/tool/darwin_arm64/link: running c++ failed: exit status 1
duplicate symbol '_set_numa_thread_affinity' in:
    /Users/chedly/LocalAI/go-ggml-transformers/libtransformers.a(ggml.o)
    /Users/chedly/LocalAI/go-llama/libbinding.a(ggml.o)
duplicate symbol '_clear_numa_thread_affinity' in:
    /Users/chedly/LocalAI/go-ggml-transformers/libtransformers.a(ggml.o)
    /Users/chedly/LocalAI/go-llama/libbinding.a(ggml.o)
ld: 2 duplicate symbols for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)

make: *** [build] Error 1
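The log shows the same ggml.o symbols defined in two static archives. A quick way to confirm this (a suggested check, not part of the original report; nm ships with the Xcode command line tools, and the archive paths are taken from the linker error above):

# List the clashing NUMA symbols in each archive; both should report
# _set_numa_thread_affinity and _clear_numa_thread_affinity as defined (type T),
# which is exactly the duplicate-definition pair ld complains about:
nm /Users/chedly/LocalAI/go-ggml-transformers/libtransformers.a | grep numa_thread_affinity
nm /Users/chedly/LocalAI/go-llama/libbinding.a | grep numa_thread_affinity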

Additional context: Please let me know if you need more context; this is my first ever issue on GitHub. Thank you!

Oneil974 commented 1 year ago

Same error

localai-bot commented 1 year ago

:warning::warning::warning::warning::warning:

Hi! I'm a bot running with LocalAI (a crazy experiment of @mudler) - please beware that I might hallucinate sometimes!

but... I can also be funny or helpful :smilecat: and I can generally provide good tips or point to places to look in the documentation or the code, based on what you wrote in the issue.

Don't engage in conversation with me, I don't support (yet) replying!

:warning::warning::warning::warning::warning:

Based on the error message, it seems that there are duplicate symbols coming from the ggml.o object that is compiled into both libtransformers.a (from go-ggml-transformers) and libbinding.a (from go-llama). This can happen when two copies of the same library are included in the build process. To troubleshoot this issue, you can try the following steps:

  1. Check the versions of the go-ggml-transformers and go-llama packages installed on your system and ensure that they are compatible with each other. You can use the go list command to check the versions of the packages (see the sketch after this list).
  2. Try excluding the duplicate ggml.o from the build by setting the CGO_LDFLAGS variable:
    CGO_LDFLAGS="${CGO_LDFLAGS} -Wl,-rpath=/usr/local/lib"

    Note that -Wl,-rpath adds /usr/local/lib to the runtime library search path, so it affects where libraries are found at run time rather than which objects get linked.

  3. If the above steps do not work, you can try building the LocalAI package from source using the go get command instead of the go build command. This will ensure that the latest versions of the packages are installed and may help to resolve any compatibility issues. I hope this helps! Let me know if you have any further questions or need more assistance.
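As a concrete sketch of step 1 (the module paths are taken from the compiler output in the log above; whether go list resolves them this way in the LocalAI tree is an assumption):

cd ~/LocalAI
# Print the resolved version of each of the two bindings that both embed ggml:
go list -m github.com/go-skynet/go-llama.cpp
go list -m github.com/go-skynet/go-ggml-transformers.cpp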


neversettle7 commented 1 year ago

Same issue

wysstartgo commented 1 year ago

I cannot build it on an M2 MacBook Pro with arm64 either.

Aisuko commented 11 months ago

Hi guys, still unable to build successfully? Here is the documentation; I hope it helps: https://localai.io/basics/build/index.html

ericblue commented 10 months ago

Hi. I'm on an M2 MacBook Pro and have also been unable to get a completely successful build going. I think there are a few different issues going on with creating the archive, and the majority seem to be with cmake mistakenly identifying the architecture as x86_64 instead of arm64. I'm not sure if this is your specific case, but I do suggest trying another route.

There are a bunch of documented workarounds with flags and options to set, but I found the solution was to not use the cmake installed via brew.

I uninstalled cmake and installed it directly via https://cmake.org/download/. To access the CLI tools, just make sure your PATH is updated:

export PATH="/Applications/CMake.app/Contents/bin:$PATH"
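A quick way to verify the shell now resolves the cmake.org build rather than a leftover brew one (a suggested check, assuming the default install location):

which cmake        # expect /Applications/CMake.app/Contents/bin/cmake
cmake --version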

Running a clean make and then building with:

LLAMA_METAL=1 make

This resulted in the majority of the repos compiling successfully. I'm currently wrestling with getting grpc to compile successfully with llama.cpp, and dealing with some missing includes. Otherwise, using the official versions of cmake and go (as opposed to the brew versions) appears to be working.

Aisuko commented 10 months ago

If you fail to build grpc on macOS, check here: https://github.com/mudler/LocalAI/issues/1197#issuecomment-1779573484

ericblue commented 10 months ago

Great, thanks @Aisuko ! I was trying some other env variables to append the include path but missed these.

Adding the include dirs for the brew-installed protobuf, grpc, and abseil appears to have gotten things much further, e.g.:

export CPLUS_INCLUDE_PATH=/usr/local/opt/protobuf@21/include:/usr/local/opt/grpc/include/
export C_INCLUDE_PATH=/usr/local/opt/protobuf@21/include:/usr/local/opt/grpc/include/:/usr/local/opt/abseil/include/
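If that still fails, it may be worth confirming the headers actually exist under those prefixes (a suggested sanity check; the subdirectory names assume the standard protobuf/grpc/abseil layouts):

ls /usr/local/opt/protobuf@21/include/google/protobuf
ls /usr/local/opt/grpc/include/grpcpp
ls /usr/local/opt/abseil/include/absl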

However, on linking I am getting:

ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[4]: *** [bin/grpc-server] Error 1
make[3]: *** [examples/grpc-server/CMakeFiles/grpc-server.dir/all] Error 2
make[2]: *** [all] Error 2
make[1]: *** [grpc-server] Error 2
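One way to narrow down a "symbol(s) not found for architecture arm64" failure is to check which architectures the brew-installed libraries were actually built for (a diagnostic sketch; the exact dylib names are assumptions):

# file and lipo report the architectures baked into a dylib; an x86_64-only
# result here would explain why the arm64 link fails:
file /usr/local/opt/grpc/lib/libgrpc++.dylib
lipo -info /usr/local/opt/protobuf@21/lib/libprotobuf.dylib

Note that on Apple Silicon the native Homebrew prefix is /opt/homebrew; /usr/local is the Intel prefix, so x86_64-only libraries under /usr/local/opt would be expected.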

I realize these are likely issues with llama.cpp or grpc itself, so I appreciate any advice. I am curious about the challenges here - are these all likely due to something specific with M1 or M2 Macs?

magesh83 commented 8 months ago

@ericblue Did you fix this issue? I'm facing the same issue on a MacBook M2 and am unable to get past it.

ericblue commented 8 months ago

Hi @magesh83 I spent a couple of hours trying to work around this shortly after my last message, but unfortunately I made no progress. I had to pause work on this and was hopeful that by the time I got back to it the LocalAI team might have resolved it. The build instructions for Mac/M1/M2 are sparse right now, and it's not clear whether this fails on all current M1/M2 Macs with Sonoma or whether it is something peculiar to certain build environments.

@Aisuko Can you provide some guidance here?

vhscom commented 8 months ago

I've been able to do a clean, single-shot build using the makefile on a MacBook Air M2 running Fedora Asahi Remix, with the flag to build the GRPC backend. The trickiest part was getting the OpenCV header includes linked when enabling stable diffusion. Would like to see Metal support added for aarch64 machines. 🙏
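For reference, a guess at the kind of invocation described above, using variables from the LocalAI build documentation of that era (treat the exact names as assumptions):

# Build with the stable-diffusion backend (GO_TAGS) and the bundled gRPC backend:
make GO_TAGS=stablediffusion BUILD_GRPC_FOR_BACKEND_LLAMA=true build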

ChristianWeyer commented 8 months ago

> Hi @magesh83 I spent a couple of hours trying to work around this shortly after my last message, but unfortunately I made no progress. I had to pause work on this and was hopeful that by the time I got back to it the LocalAI team might have resolved it. The build instructions for Mac/M1/M2 are sparse right now, and it's not clear whether this fails on all current M1/M2 Macs with Sonoma or whether it is something peculiar to certain build environments.
>
> @Aisuko Can you provide some guidance here?

A fully working Apple ARM64 build process would be highly appreciated @Aisuko :-). I am running into the same issues.