Hi @lusinlu, thanks for reporting the issue. Can you let us know what hardware/environment you are using — for example, whether it is a real device or an emulator? In either case, please give us the specifications, e.g. Pixel 8 Pro, API 34 (Android 14.0 "UpsideDownCake"), arm64. The more information, the better. Thanks.
Marking this issue as stale since it has been open for 7 days with no activity. This issue will be closed if no further activity occurs.
This issue was closed because it has been inactive for 14 days. Please post a new issue if you need further assistance. Thanks!
Description of the bug:
Hi, we are trying to run a PyTorch model on Android, following the image segmentation example from google-ai-edge. We noticed that with the default model in the example, DeepLab v3, inference takes ~40 ms, but with our PyTorch model (~2M parameters, significantly smaller than DeepLab) converted to TFLite, inference takes ~200 ms. For converting the model, we are using simple code along these lines:
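(The original snippet was not captured in this report; below is a minimal sketch of what such a conversion typically looks like with the ai-edge-torch API. The model architecture, input shape, and file name are placeholders, not the reporter's actual code.)

```python
import torch
import torch.nn as nn
import ai_edge_torch

# Placeholder network standing in for the reporter's ~2M-parameter
# segmentation model; the actual architecture was not included.
class TinySegNet(nn.Module):
    def __init__(self, num_classes: int = 21):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),
        )

    def forward(self, x):
        return self.layers(x)

model = TinySegNet().eval()

# ai_edge_torch.convert traces the model with the given sample inputs,
# which fix the input signature of the exported graph.
sample_inputs = (torch.randn(1, 3, 512, 512),)
edge_model = ai_edge_torch.convert(model, sample_inputs)

# Write the converted model out as a .tflite flatbuffer for Android.
edge_model.export("model.tflite")
```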
What could possibly be the issue here?
Actual vs expected behavior:
No response
Any other information you'd like to share?
No response