Closed: fbellomi closed this 3 months ago
Hey @fbellomi I just upgraded the binding to the latest llama.cpp version (java-llama.cpp version 3.0.2). Can you please check if the problem persists? If it does, I'll have a closer look.
@kherud, thanks for your quick reply
I tried with 3.0.2, but it seems to fail to load the binary library
/de/kherud/llama/Mac/x86_64
'ggml-metal.metal' not found
Extracted 'libllama.dylib' to '/var/folders/h0/xch4xg717wq862s2hppsfyc00000gn/T/libllama.dylib'
/private/var/folders/h0/xch4xg717wq862s2hppsfyc00000gn/T/libllama.dylib: dlopen(/private/var/folders/h0/xch4xg717wq862s2hppsfyc00000gn/T/libllama.dylib, 0x0001): tried: '/private/var/folders/h0/xch4xg717wq862s2hppsfyc00000gn/T/libllama.dylib' (mach-o file, but is an incompatible architecture (have (arm64), need (x86_64h)))
Failed to load native library: /var/folders/h0/xch4xg717wq862s2hppsfyc00000gn/T/libllama.dylib. osinfo: Mac/x86_64
Exception in thread "main" java.lang.UnsatisfiedLinkError: No native library found for os.name=Mac, os.arch=x86_64, paths=[/de/kherud/llama/Mac/x86_64:/Users/francesco/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:.]
at de.kherud.llama.LlamaLoader.loadNativeLibrary(LlamaLoader.java:158)
at de.kherud.llama.LlamaLoader.initialize(LlamaLoader.java:65)
at de.kherud.llama.LlamaModel.<clinit>(LlamaModel.java:27)
at com.creactives.llm.UNSPSCEmbeddingsJLL.main(UNSPSCEmbeddingsJLL.java:40)
It seems to correctly recognize the x86_64 architecture, but it rejects the extracted library.
I re-checked with 3.0.1: it correctly loads the library but then keeps failing as in my comment above, so this issue appears to be specific to 3.0.2. I also tried resetting the Gradle cache and re-downloading the lib.
I checked the downloaded jar on my file system, and it contains the binaries in /de/kherud/llama/Mac/x86_64 (I don't know how to check if they are well-formed)
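(For reference, one way I could verify this is to read the Mach-O header of the bundled dylib directly from the classpath; a rough standalone sketch, not part of java-llama.cpp, using the resource path from the log above and the standard Mach-O CPU type constants for x86_64 and arm64:)

```java
import java.io.DataInputStream;
import java.io.InputStream;

// Hypothetical helper, not part of java-llama.cpp: reads the Mach-O header of the
// dylib bundled on the classpath and prints which CPU architecture it was built for.
public class DylibArchCheck {

    public static void main(String[] args) throws Exception {
        // Resource path taken from the log output above.
        String resource = "/de/kherud/llama/Mac/x86_64/libllama.dylib";
        try (InputStream in = DylibArchCheck.class.getResourceAsStream(resource)) {
            if (in == null) {
                System.out.println("Resource not found on the classpath: " + resource);
                return;
            }
            byte[] header = new byte[8];
            new DataInputStream(in).readFully(header);
            // A thin 64-bit Mach-O file starts with the little-endian magic CF FA ED FE;
            // the CPU type follows as a little-endian 32-bit integer.
            int magic = readLeInt(header, 0);
            int cpuType = readLeInt(header, 4);
            if (magic != 0xFEEDFACF) {
                System.out.printf("Not a thin 64-bit Mach-O file (magic 0x%08X)%n", magic);
            } else if (cpuType == 0x01000007) {
                System.out.println("Built for x86_64");
            } else if (cpuType == 0x0100000C) {
                System.out.println("Built for arm64");
            } else {
                System.out.printf("Unknown CPU type 0x%08X%n", cpuType);
            }
        }
    }

    private static int readLeInt(byte[] b, int off) {
        return (b[off] & 0xFF) | (b[off + 1] & 0xFF) << 8
                | (b[off + 2] & 0xFF) << 16 | (b[off + 3] & 0xFF) << 24;
    }
}
```

Running `file` or `lipo -info` on the extracted copy in /var/folders/... would show the same information.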
Thanks, Francesco
Thanks for the feedback, I'll look into it later today.
Hey @fbellomi sorry for the late reply. I think I found the problem: the GitHub Actions runner macos-latest changed from x86_64 to arm64 at some point. The build workflow of this repository still used macos-latest in the x86_64 job, though. That's why you got the UnsatisfiedLinkError: the library was wrongly built for arm64, but then moved to the x86_64 Java resources directory. I hope everything works for you now with version 3.1.0.
Hi, thanks for your support
I upgraded to 3.1.0.
Still no luck loading the native lib, but I got a different error:
/de/kherud/llama/Mac/x86_64
'ggml-metal.metal' not found
Extracted 'libllama.dylib' to '/var/folders/h0/xch4xg717wq862s2hppsfyc00000gn/T/libllama.dylib'
/private/var/folders/h0/xch4xg717wq862s2hppsfyc00000gn/T/libllama.dylib: dlopen(/private/var/folders/h0/xch4xg717wq862s2hppsfyc00000gn/T/libllama.dylib, 0x0001): Symbol not found: (_cblas_sgemm$NEWLAPACK$ILP64)
Referenced from: '/private/var/folders/h0/xch4xg717wq862s2hppsfyc00000gn/T/libllama.dylib'
Expected in: '/System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate'
Failed to load native library: /var/folders/h0/xch4xg717wq862s2hppsfyc00000gn/T/libllama.dylib. osinfo: Mac/x86_64
Exception in thread "main" java.lang.UnsatisfiedLinkError: No native library found for os.name=Mac, os.arch=x86_64, paths=[/de/kherud/llama/Mac/x86_64:/Users/francesco/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:.]
at de.kherud.llama.LlamaLoader.loadNativeLibrary(LlamaLoader.java:158)
at de.kherud.llama.LlamaLoader.initialize(LlamaLoader.java:65)
at de.kherud.llama.LlamaModel.<clinit>(LlamaModel.java:22)
at com.creactives.llm.UNSPSCEmbeddingsJLL.main(UNSPSCEmbeddingsJLL.java:40)
Thanks, Francesco
I've tested with version 3.2.1 and it works as expected.
I'm closing this issue as resolved.
Hello,
I'm trying to use snowflake-arctic-embed-l for embedding,
I'm using https://huggingface.co/ChristianAzinn/snowflake-arctic-embed-l-gguf
I'm on macOS x86_64 (CPU, no CUDA), using the Maven dependency directly (no GPU setup).
It fails with this message below,
I'm not sure whether this is the problem, but it seems to pick up the AMD Radeon Pro 575X (the graphics accelerator) and try to use it as a GPU, and I don't know how to disable this.
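(For reference, this is roughly how I construct the model; a minimal sketch, assuming the 3.x ModelParameters builder exposes setModelFilePath and setNGpuLayers as in the project README. The exact setter names may differ between versions, and the model path is just a placeholder; setting the GPU layer count to 0 was my attempt at keeping everything on the CPU:)

```java
import de.kherud.llama.LlamaModel;
import de.kherud.llama.ModelParameters;

public class CpuOnlyEmbedding {

    public static void main(String[] args) {
        // Assumed 3.x-style builder methods; names may differ in other versions.
        ModelParameters params = new ModelParameters()
                .setModelFilePath("models/snowflake-arctic-embed-l.Q8_0.gguf") // placeholder path
                .setNGpuLayers(0); // keep all layers on the CPU, no Metal/GPU offload

        // LlamaModel is AutoCloseable in recent versions, so try-with-resources
        // frees the native handle when done.
        try (LlamaModel model = new LlamaModel(params)) {
            System.out.println("Model loaded on CPU only");
        }
    }
}
```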
As a quick test, I tried the latest version of Ollama (which uses a more recent build of llama.cpp) and it works fine on my system, but I'm not really sure whether the issue is the llama.cpp version.
Thanks for any help, and thanks for your efforts with java-llama.cpp
Francesco