DenisovAV / flutter_gemma

A Flutter plugin for running the Gemma AI model locally on a device from a Flutter application.

<uses-native-library> missing in Android LLM Inference Guide #3

Closed · ahndwon closed this issue 6 months ago

ahndwon commented 6 months ago

I was not able to run the example after I added the model.bin to my device.

I followed the details of the fix from the Android LLM Inference guide and added the `<uses-native-library>` tags to AndroidManifest.xml. After that I was able to run the app and use Gemma. This should be mentioned in the README.md and added to the example.
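For reference, a minimal sketch of the manifest change being described, assuming the optional OpenCL library declarations recommended by the MediaPipe Android LLM Inference guide for GPU inference (the exact `android:name` values are an assumption, not quoted from this thread):

```xml
<!-- android/app/src/main/AndroidManifest.xml (other attributes/elements omitted) -->
<application>
    <!-- Declare the optional native OpenCL libraries so the GPU delegate can load
         them on Android 12+; app still installs on devices without them because
         android:required="false". Library names assumed from the MediaPipe guide. -->
    <uses-native-library android:name="libOpenCL.so" android:required="false"/>
    <uses-native-library android:name="libOpenCL-car.so" android:required="false"/>
    <uses-native-library android:name="libOpenCL-pixel.so" android:required="false"/>
</application>
```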
DenisovAV commented 6 months ago

Thank you for your comment! Interestingly, everything works on my device without this fix. I'll try to reproduce it; could you tell me which device and Android version you have, and with which type of build you reproduce it?

ahndwon commented 6 months ago

Okay, below are my device details:

- Device: Google Pixel 7
- Android version: 14
- Build number: AP1A.240405.002
- Model: TensorFlow Lite gemma-2b-it-gpu-int4

I was able to reproduce it by removing the `<uses-native-library>` tags. iOS and web work fine.

DenisovAV commented 6 months ago

I reproduced it. Yes, you're right, this needs to be added to the manifest if you're using a GPU model; for a CPU model it's not necessary, as everything works without it. I had only tried with CPU before.

I will add this to the manual today, thank you.

DenisovAV commented 6 months ago

Done, version 0.1.4