DenisovAV / flutter_gemma

A Flutter plugin for running the Gemma AI model locally on a device from a Flutter application.
MIT License

Is there any problem while transferring the file? #11

Closed aditya-wappnet closed 1 month ago

aditya-wappnet commented 3 months ago

I/r_gemma_example(22300): Compiler allocated 6489KB to compile void android.view.ViewRootImpl.performTraversals()
F/native (22300): F0000 00:00:1723190524.790431 22300 llm_inference_engine.cc:92] Failed to get LLM params: INVALID_ARGUMENT: LLM model file is null
F/native (22300): terminating.
F/native (22300): F0000 00:00:1723190524.790431 22300 llm_inference_engine.cc:92] Failed to get LLM params: INVALID_ARGUMENT: LLM model file is null
F/native (22300): terminating.
F/libc (22300): Fatal signal 6 (SIGABRT), code -1 (SI_QUEUE) in tid 22300 (r_gemma_example), pid 22300 (r_gemma_example)
Process name is dev.flutterberlin.flutter_gemma_example, not key_process

DenisovAV commented 3 months ago

Hi! Did you upload the model following the instructions?

aditya-wappnet commented 3 months ago

Yes @DenisovAV, we have to transfer model.bin, so please update the documentation. In the command adb push output_path /data/local/tmp/llm/model.bin, output_path should be replaced with the actual model.bin file being pushed.
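For reference, a minimal sketch of the corrected upload, assuming the model file has already been downloaded locally under the name model.bin (the local filename and working directory are assumptions):

# Create the target directory on the device (path taken from the command quoted above)
adb shell mkdir -p /data/local/tmp/llm/
# Push the downloaded model file itself, not a literal "output_path" placeholder
adb push model.bin /data/local/tmp/llm/model.bin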

aditya-wappnet commented 3 months ago

@DenisovAV Also, it hangs the app and the app's performance drops. Any solution?

DenisovAV commented 1 month ago

I updated the manual about uploading the model, please take a look.