Hi,
Recently I upgraded my Arm NN and Compute Library builds to v22.02 and, as per the v22.02 release notes, moved to TensorFlow v2.5.0. But when I run my fp32 TFLite model (using the ExecuteNetwork tool) I see an increase in processing time, with or without task affinity applied. I am using the CpuAcc backend on a device running Android 12. The Arm NN libraries are built with Android NDK r20b.
| Affinity | v21.05 | v22.02 | Increase % |
| -- | -- | -- | -- |
| Small | 878.29 | 1295.56 | 47.51 |
| Medium | 337.87 | 390.90 | 15.70 |
| Big | 234.64 | 283.72 | 20.92 |
| No Affinity | 245.72 | 317.33 | 29.14 |