Open — KYUNGSOO-LEE opened this issue 3 months ago
I've been looking for the same thing. I'd love to see something from the devs about measuring prefill token speed and decode token speed ourselves.
@KYUNGSOO-LEE as a crude substitute in the meantime, I am using `.sizeInTokens()` to get the input prompt's token count and dividing it by the inference time. I measure inference time by calling `timeSource.markNow()` before and after `.generateResponse()`. Maybe this can serve as a rough metric for you too.
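The pattern above can be sketched roughly as follows. This is plain Java (the comment's snippet uses Kotlin's `timeSource.markNow()`, but the idea is identical): the `generateResponse` and `sizeInTokens` functions here are hypothetical stand-ins for the MediaPipe `LlmInference` calls, just so the timing logic runs on its own.

```java
public class CrudeLlmTiming {
    // Hypothetical stand-in for LlmInference.generateResponse();
    // swap in the real MediaPipe call on-device.
    static String generateResponse(String prompt) throws InterruptedException {
        Thread.sleep(50); // simulate inference latency
        return "a short simulated model response";
    }

    // Hypothetical stand-in for LlmInference.sizeInTokens();
    // the real tokenizer will count differently than whitespace splitting.
    static int sizeInTokens(String text) {
        return text.trim().split("\\s+").length;
    }

    // Crude throughput: response tokens divided by wall-clock time
    // for the whole generateResponse() call (prefill + decode together).
    static double tokensPerSecond(String prompt) throws InterruptedException {
        long start = System.nanoTime();               // mark before inference
        String response = generateResponse(prompt);
        double elapsedSec = (System.nanoTime() - start) / 1e9;
        return sizeInTokens(response) / elapsedSec;
    }

    public static void main(String[] args) throws InterruptedException {
        double tps = tokensPerSecond("What is the capital of France?");
        System.out.println(String.format("approx. tokens/sec: %.1f", tps));
    }
}
```

Note this lumps prefill and decode into one number; to split them you would need a streaming API that lets you mark the time of the first generated token separately.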
Hi,
I would like to measure the performance of the Gemma model on-device (Android) with MediaPipe.
I read the blog post about running LLMs with MediaPipe. (https://developers.googleblog.com/en/large-language-models-on-device-with-mediapipe-and-tensorflow-lite/)
How can I get LLM performance metrics (e.g. time to first token (TTFT), time per output token (TPOT))?
I installed the LLM Inference example, but I cannot find any logs about performance.