Add an efficientnet-quantized benchmark. The weighted model `efficientnet.mlir` is only 10 MB (no Git LFS needed) and fully quantized (no floating-point operations).
## How to use
Just follow the instructions under "Deep Learning Benchmark" in README.md. The configuration is the same as for other DL benchmarks such as ResNet-18.
## About the quantized model
The model is generated from the EfficientNet-EdgeTpu(S)-quant model at https://coral.ai/models/image-classification/ using iree-import-tflite. To completely eliminate floating-point operations from the model, the Softmax layer is removed from the original graph and re-implemented in the cpp file.
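A host-side Softmax over the model's quantized output might look like the following. This is a minimal sketch, not the actual code from the cpp file: it assumes the model emits `uint8` logits with a per-tensor `(scale, zero_point)` quantization, dequantizes them, and applies the numerically stable exp-shift softmax.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical host-side softmax over quantized logits.
// Dequantize each logit with the output tensor's (scale, zero_point),
// subtract the max for numerical stability, then exponentiate and normalize.
std::vector<float> softmax(const std::vector<uint8_t>& logits,
                           float scale, int zero_point) {
  std::vector<float> x(logits.size());
  for (size_t i = 0; i < logits.size(); ++i)
    x[i] = scale * (static_cast<int>(logits[i]) - zero_point);
  float max_x = *std::max_element(x.begin(), x.end());
  float sum = 0.0f;
  for (float& v : x) { v = std::exp(v - max_x); sum += v; }
  for (float& v : x) v /= sum;
  return x;
}
```

Keeping this step on the host in floating point is cheap (it runs once per inference over the class scores) while the model itself stays fully integer.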