How many operations does PocketFlow support? I couldn't find any docs listing the available ops.
By the way, the acceleration ratios reported for MobileNet v1 & v2 are not that impressive; is there any more detailed performance data for other deconv models?
Many thanks.
PocketFlow is a training framework for generating compressed models, rather than an inference library for deploying them. We use TensorFlow Lite for deployment on mobile devices.
By "other deconv models", which models do you mean exactly?
So what if I have some custom OPs that are not in TF-Lite? Do I need to design the quantization rules myself, using GEMM for mobile acceleration, or has PocketFlow done all the dirty work?
2. Say ShuffleNet or Inception, for example.
PocketFlow does not include support for custom OPs in TF-Lite. If your OP is not supported by TF-Lite, you need to provide your own kernel implementation for it.
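On the conversion side, TF-Lite can at least carry unsupported OPs through to the generated `.tflite` file via the converter's `allow_custom_ops` flag; the device-side kernel still has to be written and registered with the interpreter separately. A minimal sketch (the tiny add-one model here is just a stand-in for a PocketFlow-compressed model):

```python
import tempfile

import tensorflow as tf

# Build and save a tiny stand-in model (in practice this would be the
# compressed model produced by PocketFlow's training pipeline).
model = tf.Module()
model.f = tf.function(
    lambda x: x + 1.0,
    input_signature=[tf.TensorSpec([1], tf.float32)],
)
export_dir = tempfile.mkdtemp()
tf.saved_model.save(model, export_dir,
                    signatures=model.f.get_concrete_function())

converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
# Let OPs with no built-in TF-Lite kernel pass through to the .tflite
# file; each such OP still needs a custom kernel registered with the
# TF-Lite interpreter on the device.
converter.allow_custom_ops = True

tflite_bytes = converter.convert()
print(len(tflite_bytes) > 0)
```

This only defers the problem to deployment time: the interpreter will refuse to run the model until a kernel for every custom OP has been registered on the device.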
We will add some benchmark results for these two networks in the next few weeks.