go-skynet / go-llama.cpp

LLama.cpp golang bindings
MIT License

Running the example fails after following the build steps in README.md #295

Open pnsvk opened 10 months ago

pnsvk commented 10 months ago
MAC-CBBH4ACVpp:go-llama.cpp pnsvk$ LIBRARY_PATH=$PWD C_INCLUDE_PATH=$PWD go run ./examples -m /Users/pnsvk/Downloads/mistral-7b-v0.1.Q4_K_M.gguf -t 14
# github.com/go-skynet/go-llama.cpp
binding.cpp:333:67: warning: format specifies type 'size_t' (aka 'unsigned long') but the argument has type 'int' [-Wformat]
binding.cpp:809:5: warning: deleting pointer to incomplete type 'llama_model' may cause undefined behavior [-Wdelete-incomplete]
./llama.cpp/llama.h:60:12: note: forward declaration of 'llama_model'
# github.com/go-skynet/go-llama.cpp/examples
/usr/local/go/pkg/tool/darwin_amd64/link: running clang++ failed: exit status 1
ld: warning: -no_pie is deprecated when targeting new OS versions
Undefined symbols for architecture x86_64:
  "_ggml_metal_add_buffer", referenced from:
      _llama_new_context_with_model in libbinding.a(llama.o)
  "_ggml_metal_free", referenced from:
      llama_context::~llama_context() in libbinding.a(llama.o)
  "_ggml_metal_get_concur_list", referenced from:
      _llama_new_context_with_model in libbinding.a(llama.o)
  "_ggml_metal_graph_compute", referenced from:
      llama_eval_internal(llama_context&, int const*, float const*, int, int, int, char const*) in libbinding.a(llama.o)
  "_ggml_metal_graph_find_concurrency", referenced from:
      _llama_new_context_with_model in libbinding.a(llama.o)
  "_ggml_metal_host_free", referenced from:
      _llama_new_context_with_model in libbinding.a(llama.o)
      llm_load_tensors(llama_model_loader&, llama_model&, int, int, int, float const*, bool, bool, ggml_type, bool, void (*)(float, void*), void*) in libbinding.a(llama.o)
      llama_model::~llama_model() in libbinding.a(llama.o)
      llama_context::~llama_context() in libbinding.a(llama.o)
  "_ggml_metal_host_malloc", referenced from:
      _llama_new_context_with_model in libbinding.a(llama.o)
      llm_load_tensors(llama_model_loader&, llama_model&, int, int, int, float const*, bool, bool, ggml_type, bool, void (*)(float, void*), void*) in libbinding.a(llama.o)
  "_ggml_metal_if_optimized", referenced from:
      _llama_new_context_with_model in libbinding.a(llama.o)
  "_ggml_metal_init", referenced from:
      _llama_new_context_with_model in libbinding.a(llama.o)
  "_ggml_metal_log_set_callback", referenced from:
      _llama_new_context_with_model in libbinding.a(llama.o)
  "_ggml_metal_set_n_cb", referenced from:
      llama_eval_internal(llama_context&, int const*, float const*, int, int, int, char const*) in libbinding.a(llama.o)
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)

Could you please help me check this error?

Here are the steps I followed to run it, as described in the README:

MAC-CBBH4ACVpp:code-checkouts pnsvk$ git clone --recurse-submodules https://github.com/go-skynet/go-llama.cpp

MAC-CBBH4ACVpp:code-checkouts pnsvk$ cd go-llama.cpp
MAC-CBBH4ACVpp:go-llama.cpp pnsvk$ make libbinding.a

MAC-CBBH4ACVpp:go-llama.cpp pnsvk$ LIBRARY_PATH=$PWD C_INCLUDE_PATH=$PWD go run ./examples -m "/model/path/here" -t 14
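Since every missing symbol is a `_ggml_metal_*` function, it looks like `llama.o` was compiled expecting Metal support that never ended up in `libbinding.a`. One thing that might be worth trying (assuming your checkout's Makefile supports the `BUILD_TYPE` switch that some go-llama.cpp revisions document; check `Makefile` to confirm) is a clean rebuild with the Metal path chosen explicitly:

```shell
# Hedged sketch: choose the Metal build path explicitly instead of the default.
# BUILD_TYPE=metal appears in some go-llama.cpp revisions; verify it exists in
# your Makefile before relying on it.
make clean
BUILD_TYPE=metal make libbinding.a   # build WITH the Metal objects linked in

# ...or, since the link is targeting x86_64 (not Apple Silicon), a plain clean
# rebuild without any Metal flags may be the right path instead:
# make clean && make libbinding.a
```

This is only a guess at the cause; the authoritative answer is whatever the Makefile in your checked-out submodule revision actually does with the Metal sources.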
macie commented 10 months ago

@pnsvk I see that you are using a quantized Mistral model, which is quite new. I also cannot run the example with a Mistral model; I get cryptic errors as well.

llama.cpp is released frequently, but go-llama.cpp cannot keep up. Maybe this is the source of the problem?