google-ai-edge / mediapipe-samples


How to get LLM model performance? #412

Open KYUNGSOO-LEE opened 3 months ago

KYUNGSOO-LEE commented 3 months ago

Hi

I would like to measure the performance of the Gemma model on-device (Android) with MediaPipe.

I read the blog post about running LLMs with MediaPipe (https://developers.googleblog.com/en/large-language-models-on-device-with-mediapipe-and-tensorflow-lite/).

How can I get LLM performance metrics such as time to first token (TTFT) and time per output token (TPOT)?

I installed the LLM Inference example, but I cannot find any logs about performance.

AkulRT commented 1 month ago

I've been looking for the same thing. It would be great to hear from the devs about how we can measure prefill token speed and decode token speed ourselves.

@KYUNGSOO-LEE as a crude substitute in the meantime, I am using .sizeInTokens() to get the input prompt's token count and dividing it by the inference time, which I measure with timeSource.markNow() before and after .generateResponse(). Maybe this can serve as a rough metric for you too (a sketch of this is below).
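
A minimal Kotlin sketch of this workaround, assuming an initialized `LlmInference` from the `com.google.mediapipe.tasks.genai` dependency. The function name `measureRoughThroughput`, and the idea of also running `sizeInTokens()` on the response to count output tokens, are illustrative additions, not part of the original suggestion:

```kotlin
import com.google.mediapipe.tasks.genai.llminference.LlmInference
import kotlin.time.DurationUnit
import kotlin.time.TimeSource

// Crude end-to-end measurement around the blocking generateResponse() call.
// This lumps prefill and decode together, so it yields a single
// tokens/second figure rather than separate TTFT/TPOT numbers.
fun measureRoughThroughput(llm: LlmInference, prompt: String) {
    val promptTokens = llm.sizeInTokens(prompt)

    val start = TimeSource.Monotonic.markNow()
    val response = llm.generateResponse(prompt) // blocks until generation completes
    val seconds = start.elapsedNow().toDouble(DurationUnit.SECONDS)

    val outputTokens = llm.sizeInTokens(response)
    println("prompt tokens: $promptTokens, output tokens: $outputTokens")
    println("total time: %.2f s, ~%.1f output tokens/s".format(seconds, outputTokens / seconds))
}
```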
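
To get closer to separate TTFT and decode numbers without library support, one rough approximation is the streaming API: treat the delay until the first partial result from `generateResponseAsync()` as TTFT (it roughly marks the end of prefill), and the remainder of the run as the decode window. A sketch, assuming a result listener registered via `setResultListener()` on the options builder as shown in the MediaPipe LLM Inference docs; `measureStreamingLatency` and the `modelPath` parameter are made-up names:

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference
import kotlin.time.DurationUnit
import kotlin.time.TimeSource

// Treats the delay before the first streamed partial result as a rough TTFT,
// and the rest of the run as the decode window.
fun measureStreamingLatency(context: Context, modelPath: String, prompt: String) {
    var start = TimeSource.Monotonic.markNow() // re-marked right before inference below
    var firstTokenAt: Double? = null           // seconds until first partial result

    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath(modelPath)
        .setResultListener { _, done ->
            val now = start.elapsedNow().toDouble(DurationUnit.SECONDS)
            if (firstTokenAt == null) firstTokenAt = now // first partial ≈ end of prefill
            if (done) {
                println("rough TTFT: %.2f s, decode window: %.2f s"
                    .format(firstTokenAt!!, now - firstTokenAt!!))
            }
        }
        .build()

    val llm = LlmInference.createFromOptions(context, options)
    start = TimeSource.Monotonic.markNow() // exclude model-load time from the numbers
    llm.generateResponseAsync(prompt)
}
```

Dividing the output token count (e.g. `sizeInTokens()` on the accumulated partial results, after generation completes) by the decode window would then give a rough decode tokens/second. None of this replaces proper instrumentation from the library, of course.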