Inference time on ARM platform

Hi, I saw the benchmark in the README and have some questions:

1. What platform was used for the inference time testing?
2. Is there any NEON acceleration for the depthwise convolutions in MobileNet?
3. There is a large gap between the theoretical speedup and the actual speedup. Although MobileNet requires roughly twice the computation of CondenseNet, I would still like to know the speed difference after platform-specific optimization.
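To make the theoretical-vs-actual question concrete, here is a rough sketch of the FLOP counting behind such comparisons. The layer dimensions below are hypothetical and not taken from either network; the point is that depthwise convolutions cut multiply-accumulates sharply on paper, but their low arithmetic intensity means the measured speedup on ARM depends heavily on whether NEON-optimized kernels exist for them.

```python
# Hypothetical illustration of theoretical cost: MAC (multiply-accumulate)
# counts for one layer, standard conv vs. depthwise-separable conv.

def standard_conv_macs(h, w, c_in, c_out, k):
    """MACs for a standard k x k convolution on an h x w output map."""
    return h * w * c_in * c_out * k * k

def depthwise_separable_macs(h, w, c_in, c_out, k):
    """MACs for a depthwise k x k conv followed by a pointwise 1 x 1 conv."""
    depthwise = h * w * c_in * k * k
    pointwise = h * w * c_in * c_out
    return depthwise + pointwise

# Example layer (made up for illustration): 56x56 map, 128 -> 128 channels, 3x3 kernel.
std = standard_conv_macs(56, 56, 128, 128, 3)
sep = depthwise_separable_macs(56, 56, 128, 128, 3)
print(f"standard: {std:,} MACs, separable: {sep:,} MACs, "
      f"theoretical ratio: {std / sep:.1f}x")
```

A ratio like this is an upper bound: without hand-tuned (e.g. NEON) depthwise kernels, the measured gap between two networks can look very different from what the MAC counts suggest.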