netdur / llama_cpp_dart

Dart binding for llama.cpp

Android: Failed to lookup symbol 'llama_backend_init': undefined symbol: llama_backend_init #27

Open · riverzhou opened this issue 9 months ago

riverzhou commented 9 months ago

Version: llama_cpp_dart 0.0.6, llama.cpp tag: b2277

logcat:

02-28 00:21:29.079  5839  8926 E flutter : [ERROR:flutter/runtime/dart_isolate.cc(1107)] Unhandled exception:
02-28 00:21:29.079  5839  8926 E flutter : Invalid argument(s): Failed to lookup symbol 'llama_backend_init': undefined symbol: llama_backend_init
02-28 00:21:29.079  5839  8926 E flutter : #0      DynamicLibrary.lookup (dart:ffi-patch/ffi_dynamic_library_patch.dart:33)
02-28 00:21:29.079  5839  8926 E flutter : #1      llama_cpp._llama_backend_initPtr (package:llama_cpp_dart/src/llama_cpp.dart:10187)
02-28 00:21:29.079  5839  8926 E flutter : #2      llama_cpp._llama_backend_init (package:llama_cpp_dart/src/llama_cpp.dart)
02-28 00:21:29.079  5839  8926 E flutter : #3      llama_cpp.llama_backend_init (package:llama_cpp_dart/src/llama_cpp.dart)
02-28 00:21:29.079  5839  8926 E flutter : #4      new Llama (package:llama_cpp_dart/src/llama.dart:74)
02-28 00:21:29.079  5839  8926 E flutter : #5      LlamaProcessor._modelIsolateEntryPoint.<anonymous closure> (package:llama_cpp_dart/src/llama_processor.dart:96)
02-28 00:21:29.079  5839  8926 E flutter : #6      _RootZone.runUnaryGuarded (dart:async/zone.dart:1594)
02-28 00:21:29.079  5839  8926 E flutter : #7      _BufferingStreamSubscription._sendData (dart:async/stream_impl.dart:339)
02-28 00:21:29.079  5839  8926 E flutter : #8      _BufferingStreamSubscription._add (dart:async/stream_impl.dart:271)
02-28 00:21:29.079  5839  8926 E flutter : #9      _SyncStreamControllerDispatch._sendData (dart:async/stream_controller.dart:784)
02-28 00:21:29.079  5839  8926 E flutter : #10     _StreamController._add (dart:async/stream_controller.dart:658)
02-28 00:21:29.079  5839  8926 E flutter : #11     _StreamController.add (dart:async/stream_controller.dart:606)
02-28 00:21:29.079  5839  8926 E flutter : #12     _RawReceivePort._handleMessage (dart:isolate-patch/isolate_patch.dart:184)
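
For context, the failure happens inside Dart's FFI symbol lookup: DynamicLibrary.lookup throws exactly this ArgumentError when the loaded shared object does not export the requested name. A minimal sketch of that lookup path (the library file name is an assumption for illustration; the package resolves its own path):

import 'dart:ffi' as ffi;

void main() {
  // Open the native library (file name assumed for this sketch).
  final lib = ffi.DynamicLibrary.open('libllama.so');
  // Throws ArgumentError("Failed to lookup symbol ...") if the .so does
  // not export the symbol, which is the crash in the logcat above.
  final ptr = lib
      .lookup<ffi.NativeFunction<ffi.Void Function()>>('llama_backend_init');
  print('resolved llama_backend_init at 0x${ptr.address.toRadixString(16)}');
}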
netdur commented 9 months ago

@riverzhou the latest llama.cpp changed llama_backend_init to take a bool argument; version 0.0.7 has been updated to match it.

riverzhou commented 9 months ago

@riverzhou the latest llama.cpp changed llama_backend_init to take a bool argument; version 0.0.7 has been updated to match it.

They removed the numa argument from llama_backend_init on Feb 16. In my test, b2277 does not have this argument.

commit f486f6e1e5e9d01603d9325ab3e05f1edb362a95
Author: bmwl <brian.marshall@tolko.com>
Date:   Fri Feb 16 01:31:07 2024 -0800

    ggml : add numa options (#5377)

diff --git a/llama.h b/llama.h
index 4a26bd61..f4ec6ea6 100644
--- a/llama.h
+++ b/llama.h
@@ -312,7 +312,10 @@ extern "C" {
     // Initialize the llama + ggml backend
     // If numa is true, use NUMA optimizations
     // Call once at the start of the program
-    LLAMA_API void llama_backend_init(bool numa);
+    LLAMA_API void llama_backend_init(void);
+
+    //optional:
+    LLAMA_API void llama_numa_init(enum ggml_numa_strategy numa);

     // Call once at the end of the program - currently only used for MPI
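
In other words, initialization is now split into two calls. A hedged sketch of the sequence through regenerated Dart bindings (assumptions: llama_cpp is the ffigen wrapper class named in the stack trace, the regenerated bindings expose the new ggml_numa_strategy enum, and the library file name is illustrative):

import 'dart:ffi' as ffi;
import 'package:llama_cpp_dart/src/llama_cpp.dart';

void main() {
  final bindings = llama_cpp(ffi.DynamicLibrary.open('libllama.so'));
  // Since f486f6e the backend init takes no arguments...
  bindings.llama_backend_init();
  // ...and NUMA setup is a separate, optional call.
  bindings.llama_numa_init(ggml_numa_strategy.GGML_NUMA_STRATEGY_DISABLED);
}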

I checked your source code. Both 0.0.6 and 0.0.7 still have the numa argument, so they cannot work with upstream llama.cpp after Feb 16:

  void llama_backend_init(
    bool numa,
  ) {
    return _llama_backend_init(
      numa,
    );
  }
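
For comparison, regenerating the bindings against b2277 should produce an argument-free wrapper, roughly like this sketch in the same ffigen style (not the actual package output; _lookup is the generated symbol resolver already present in llama_cpp.dart):

  // What ffigen would emit for the new zero-argument C signature.
  void llama_backend_init() {
    return _llama_backend_init();
  }

  late final _llama_backend_initPtr =
      _lookup<ffi.NativeFunction<ffi.Void Function()>>('llama_backend_init');
  late final _llama_backend_init =
      _llama_backend_initPtr.asFunction<void Function()>();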
netdur commented 9 months ago

That's weird, I will double-check.

netdur commented 9 months ago

@riverzhou you are correct. It turns out my git pull did not update the llama.cpp code and I had to hard-reset. Please try the latest update.

riverzhou commented 8 months ago

@riverzhou you are correct. It turns out my git pull did not update the llama.cpp code and I had to hard-reset. Please try the latest update.

Great! Thanks!