google-ai-edge / mediapipe

Cross-platform, customizable ML solutions for live and streaming media.
https://ai.google.dev/edge/mediapipe
Apache License 2.0

The LlmInference model is not closing #5740

Open moon5bal opened 1 day ago

moon5bal commented 1 day ago

Have I written custom code (as opposed to using a stock example script provided in MediaPipe)

Yes

OS Platform and Distribution

Android 14

Mobile device if the issue happens on mobile device

QCOM ADP 8155

Browser and version if the issue happens on browser

No response

Programming Language and version

Kotlin/Java

MediaPipe version

0.10.18

Bazel version

No response

Solution

LlmInference

Android Studio, NDK, SDK versions (if issue is related to building in Android environment)

Android Studio Koala | 2024.1.1

Xcode & Tulsi version (if issue is related to building for iOS)

No response

Describe the actual behavior

I call the close() function on LlmInference, but it does not close immediately.

Describe the expected behaviour

close() should terminate and clean up the LlmInference model immediately.

Standalone code/steps you may have used to try to get what you need

inferenceModel.close()

Other info / Complete Logs

I'm modifying the MediaPipe LlmInference example app for testing.
I needed to interrupt the LLM inference, so I called close() in LlmInference.java, but it does not terminate immediately.
With the Gemma 2B model it terminates right away, but with the Gemma 7B model it takes 2-3 minutes to stop.
Are you planning to change this so that larger models can also terminate immediately?
If there is another way to do this, please let me know.
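As a stopgap while this is investigated, one option is to call close() from a background thread with a timeout, so the caller is not blocked for the full 2-3 minutes. The sketch below is a self-contained illustration of that pattern; SlowModel is a hypothetical stand-in for LlmInference (it just sleeps in close()), not the real MediaPipe API.

```java
import java.util.concurrent.*;

public class CloseWithTimeout {
    // Hypothetical stand-in for LlmInference: close() blocks for a while,
    // simulating the slow native teardown observed with larger models.
    static class SlowModel implements AutoCloseable {
        @Override
        public void close() {
            try {
                Thread.sleep(200);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    // Run model.close() on a worker thread and report whether it finished
    // within timeoutMs, instead of blocking the caller indefinitely.
    static boolean closeWithTimeout(AutoCloseable model, long timeoutMs) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<?> task = executor.submit(() -> {
            try {
                model.close();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        });
        try {
            task.get(timeoutMs, TimeUnit.MILLISECONDS);
            return true;
        } catch (TimeoutException e) {
            return false; // close() is still running on the worker thread
        } finally {
            executor.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // Times out: close() takes ~200 ms but we only wait 50 ms.
        System.out.println(closeWithTimeout(new SlowModel(), 50));
        // Completes: 1000 ms is plenty for the 200 ms close().
        System.out.println(closeWithTimeout(new SlowModel(), 1000));
    }
}
```

Note this only keeps the calling thread responsive; the underlying cleanup still takes as long as it takes.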
kuaashish commented 1 day ago

Hi @moon5bal,

Thank you for your observation. Could you please let us know which physical device you are using for this implementation? Additionally, could you try this on another physical device to check if the behavior is the same? This information will help us investigate the issue further and discuss potential fixes for Gemma 7B and other large models we plan to support in the future.

moon5bal commented 10 hours ago

Hi @kuaashish, I'm using the Qualcomm ADP 8155 board. The board spec is here: https://www.lantronix.com/products/sa8155p-automotive-development-platform/#product-specifications

I only have the ADP 8155 device right now, so I can't check on other hardware immediately. If I get other devices, I'll test them. Also, I think performance varies by model, but it seems that while the Java side is cleaned up when LlmInference.close() is called, the native side is not being cleaned up.
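To make the hypothesis concrete: if the native decode loop only checks for shutdown between long-running steps (or not at all), releasing the Java wrapper won't stop it promptly. The sketch below is a hypothetical illustration of cooperative cancellation, not MediaPipe's actual native code: FakeEngine simulates a token-by-token generation loop that polls a cancellation flag, so close() returns immediately and the loop exits at its next check.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class CooperativeCancel {
    // Hypothetical engine: a generation loop that polls a shared flag.
    // If the loop did NOT poll the flag, it would run all maxTokens
    // steps after close() -- analogous to the 2-3 minute shutdown.
    static class FakeEngine {
        private final AtomicBoolean cancelled = new AtomicBoolean(false);

        // Simulated decode loop; each iteration stands in for one token.
        int generate(int maxTokens) {
            int emitted = 0;
            while (emitted < maxTokens && !cancelled.get()) {
                emitted++;
                try {
                    Thread.sleep(1); // stand-in for per-token compute
                } catch (InterruptedException e) {
                    break;
                }
            }
            return emitted;
        }

        // Returns immediately; the loop stops at its next flag check.
        void close() {
            cancelled.set(true);
        }
    }

    // Start generation on a worker thread, cancel after cancelAfterMs,
    // and return how many tokens were actually emitted.
    static int runAndCancel(int maxTokens, long cancelAfterMs) throws InterruptedException {
        FakeEngine engine = new FakeEngine();
        final int[] emitted = new int[1];
        Thread worker = new Thread(() -> emitted[0] = engine.generate(maxTokens));
        worker.start();
        Thread.sleep(cancelAfterMs);
        engine.close();
        worker.join();
        return emitted[0];
    }

    public static void main(String[] args) throws InterruptedException {
        // Cancelling after ~50 ms stops the loop far short of 10000 tokens.
        System.out.println(runAndCancel(10_000, 50) < 10_000);
    }
}
```

With this pattern the stop latency is bounded by one loop iteration rather than by the remaining generation work, which is presumably what an "immediate" close for larger models would require on the native side.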