Open MC-Bourguiba opened 1 year ago
Same error
Hi! I'm a bot running with LocalAI ( a crazy experiment of @mudler ) - please beware that I might hallucinate sometimes!
but.... I can also be funny or helpful :smile_cat: and I can provide, generally speaking, good tips or places to look in the documentation or in the code based on what you wrote in the issue.
Don't engage in conversation with me, I don't support (yet) replying!
Based on the error message, it seems that there are duplicate symbols in the ggml.o and ggml.a files generated by the go-ggml-transformers and go-llama packages respectively. This could be caused by different versions of the same library being included in the build process.
To troubleshoot this issue, you can try the following steps:
1. Check the versions of the go-ggml-transformers and go-llama packages installed on your system and ensure that they are compatible with each other. You can use the go list command to check the versions of the packages.
2. Exclude the duplicate ggml.o file from the build process by adding the following line to the LD_FLAGS variable:
CGO_LDFLAGS="${CGO_LDFLAGS} -Wl,-rpath=/usr/local/lib"
This will tell the linker to only use the libraries installed in /usr/local/lib and exclude any other libraries that might be included in the build process.
3. Build the LocalAI package from source using the go get command instead of the go build command. This will ensure that the latest versions of the packages are installed and may help to resolve any compatibility issues.
I hope this helps! Let me know if you have any further questions or need more assistance.
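The LD_FLAGS suggestion can be tried from the shell before running make; this is a minimal sketch, assuming the /usr/local/lib path from the bot's reply actually applies to your system:

```shell
# Append the rpath flag to CGO_LDFLAGS without clobbering any flags
# that are already set in the environment
export CGO_LDFLAGS="${CGO_LDFLAGS} -Wl,-rpath=/usr/local/lib"
echo "CGO_LDFLAGS=${CGO_LDFLAGS}"
```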
Same issue
I can't build it on an M2 MacBook Pro with arm64 either.
Hi guys, still can't build successfully? Here is the documentation; I hope it will be helpful. https://localai.io/basics/build/index.html
Hi. I'm on an M2 MacBook Pro and have also been unable to get a completely successful build going. I think there are a few different issues going on with creating the archive, and the majority seems to be with cmake mistakenly identifying the architecture as x86_64 and not arm64. I'm not sure if this is your specific case though, but I do suggest trying another route.
There are a bunch of documented workarounds with flags and options to set, but I found the solution was to not use cmake via brew.
I uninstalled cmake and installed directly via https://cmake.org/download/. To access CLI tools just make sure that your path is updated:
export PATH="/Applications/CMake.app/Contents/bin:$PATH"
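To confirm the official CMake is the one that now gets picked up (the path assumes the default /Applications install location), you can check the front of PATH:

```shell
# Prepend CMake.app's bundled binaries so they shadow any brew-installed cmake
export PATH="/Applications/CMake.app/Contents/bin:$PATH"
# The first PATH entry decides which binary `which cmake` resolves to
echo "${PATH%%:*}"
```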
Running a clean make and then with:
LLAMA_METAL=1 make
This resulted in the majority of repos compiling successfully. I'm currently wrestling with getting grpc to compile successfully with llama2.cpp and dealing with some missing includes. Otherwise, using the official versions of cmake and go (as opposed to the brew versions) appears to be working.
If you fail to build grpc on macOS, check here: https://github.com/mudler/LocalAI/issues/1197#issuecomment-1779573484
Great, thanks @Aisuko! I was trying some other env variables to append the include path but missed these.
Adding the include dirs for the brew installed protobuf, grpc and abseil appears to have got things much further. e.g.
export CPLUS_INCLUDE_PATH=/usr/local/opt/protobuf@21/include:/usr/local/opt/grpc/include/
export C_INCLUDE_PATH=/usr/local/opt/protobuf@21/include:/usr/local/opt/grpc/include/:/usr/local/opt/abseil/include/
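Before exporting these, it may help to confirm the include directories actually exist; note that /usr/local/opt paths assume an Intel-prefix Homebrew, while Apple Silicon installs typically live under /opt/homebrew. A small check, with the directory list taken as an assumption from the comment above:

```shell
# Count how many of the expected brew include directories are missing
missing=0
for d in /usr/local/opt/protobuf@21/include \
         /usr/local/opt/grpc/include \
         /usr/local/opt/abseil/include; do
  if [ ! -d "$d" ]; then
    echo "missing: $d"
    missing=$((missing + 1))
  fi
done
echo "missing directories: $missing"
```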
However, on linking I am getting:
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[4]: *** [bin/grpc-server] Error 1
make[3]: *** [examples/grpc-server/CMakeFiles/grpc-server.dir/all] Error 2
make[2]: *** [all] Error 2
make[1]: *** [grpc-server] Error 2
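Since earlier comments suspect cmake detecting x86_64 instead of arm64, one quick sanity check is to compare the host architecture against what the build targeted (uname and file are standard tools; the bin/grpc-server path is taken from the error output above):

```shell
# On an M1/M2 Mac this should print arm64; if the build logs mention
# x86_64 instead, cmake targeted the wrong architecture
uname -m
# If the link partially produced an artifact, inspect its architecture, e.g.:
#   file bin/grpc-server
```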
I realize these are likely issues with llama2 or grpc itself, so I appreciate any advice. I am curious about the challenges here - are these all likely due to something specific with M1 or M2 Macs?
@ericblue Did you fix this issue, facing the same issue in Macbook M2. Unable to get past it.
Hi @magesh83 I spent a couple hours trying to work around this shortly after my last message, but unfortunately I made no progress. I had to pause work on this and was hopeful by the time I got back to it the LocalAI team might have resolved. The build instructions right now for mac/M1/M2 are sparse and it's not clear if this is not building on all current M1/M2 macs with Sonoma, or if it is something peculiar with build environments.
@Aisuko Can you provide some guidance here?
I've been able to do a clean build using the Makefile in a single shot on a MacBook Air M2 running Fedora Asahi Remix, with the flag set to build the GRPC backend. The trickiest part was getting the OpenCV header includes linked when enabling stable diffusion. Would like to see Metal support added for aarch64 machines. 🙏
A fully working Apple ARM64 build process would be highly appreciated @Aisuko :-). I am running into the same issues.
LocalAI version: commit 3829aba869f8925dde7a1c9f280a4718dda3a18c/ docker 6102e12c4df1
Environment, CPU architecture, OS, and Version: MacBook Air M2, Ventura 13.4
Describe the bug
Unable to build either locally or using docker. Both methods yield the same error.
To Reproduce
Expected behavior
Logs
Additional context
Please let me know if you need more context; this is my first ever issue on GitHub. Thank you!