Closed: gagangayari closed this 4 weeks ago

I have been trying llm_inference on Android (https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples/llm_inference/android). While the model performs perfectly in the emulator, it starts generating junk output when I run it on an actual phone.

The model was downloaded from https://www.kaggle.com/models/google/gemma/tfLite/, as provided in the docs.

Phone configuration: Samsung A23, Android 13, 6 GB RAM.
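For context, session setup in the sample boils down to a few calls. Here is a minimal sketch; the model path and the setMaxTokens value are assumptions (the sample app reads these from its own configuration):

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Minimal sketch. The path assumes the model was pushed to
// /data/local/tmp/llm/ as described in the MediaPipe docs;
// adjust it to wherever you store the .bin file.
fun createSession(context: Context): LlmInference {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/gemma-1.1-2b-it-gpu-int4.bin")
        .setMaxTokens(512)
        .build()
    return LlmInference.createFromOptions(context, options)
}

fun runPrompt(context: Context) {
    val llm = createSession(context)
    // Synchronous generation for simplicity; the sample app uses the
    // async variant with a result listener.
    println(llm.generateResponse("Write a one-line greeting."))
}
```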
I got the same result
Is there any response?
You did it with the GPU model, not the CPU one. I got the same problem.
I faced the same issue; switching from the CPU model to the GPU model fixed it for me. Try using the GPU model instead.
After switching to the gemma-1.1-2b-it-gpu-int4 model to get a cleaner reply, I got the following error on initializing the inference session:
com.google.mediapipe.framework.MediaPipeException: internal: Failed to initialize session: %sCan not open OpenCL library on this device - undefined symbol: clSetPerfHintQCOM
Device: Samsung F14 5G (GPU Renderer: Mali-G68)
The guides do not mention adding uses-native-library tags to AndroidManifest.xml for the OpenCL libraries. These tags are present in the AndroidManifest.xml of the sample app, inside the <application> element. Adding them solves the problem on the above device configuration:
<uses-native-library android:name="libOpenCL.so" android:required="false" />
<uses-native-library android:name="libOpenCL-car.so" android:required="false" />
<uses-native-library android:name="libOpenCL-pixel.so" android:required="false" />
A new version of MediaPipe recently went out which should hopefully fix this issue as well.
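If you consume the task library through Gradle rather than building the sample from source, picking up the fix should just be a version bump. The artifact name is from the MediaPipe docs; the version number below is illustrative, so check the release notes for the latest:

```kotlin
// app/build.gradle.kts (version number is illustrative)
dependencies {
    implementation("com.google.mediapipe:tasks-genai:0.10.14")
}
```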