buddy-compiler / buddy-benchmark

Benchmark Framework for Buddy Projects
Apache License 2.0

[DL] Add efficientnet-quantized benchmark. #66

Closed xlinsist closed 1 year ago

xlinsist commented 1 year ago

Add the efficientnet-quantized benchmark. The model file efficientnet.mlir (with weights embedded) is only 10 MB, so Git LFS is not needed, and it is fully quantized, containing no floating-point operations.

How to use

Just follow the "Deep Learning Benchmark" instructions in README.md. The configuration is the same as for the other DL benchmarks, such as ResNet-18.
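
For context, the DL benchmarks in this repository are driven by Google Benchmark. Below is a minimal sketch of what a driver for this model could look like; the `_mlir_ciface_forward` symbol, the `MemRef` container/header path, and the tensor shapes and element type are assumptions for illustration, not the exact code added in this PR:

```cpp
// Hypothetical sketch of a Google Benchmark driver for the quantized model.
// The forward symbol, MemRef container, shapes, and element type are assumptions.
#include <benchmark/benchmark.h>
#include <buddy/Core/Container.h> // MemRef<T, N> from buddy-mlir (assumed path)

// Entry point assumed to be produced when lowering efficientnet.mlir.
extern "C" void _mlir_ciface_forward(MemRef<int8_t, 2> *output,
                                     MemRef<int8_t, 4> *input);

static void BM_EfficientNetQuantized(benchmark::State &state) {
  intptr_t inputSizes[4] = {1, 224, 224, 3}; // NHWC input (assumed shape)
  intptr_t outputSizes[2] = {1, 1001};       // class scores (assumed shape)
  MemRef<int8_t, 4> input(inputSizes, 0);
  MemRef<int8_t, 2> output(outputSizes, 0);
  for (auto _ : state)
    _mlir_ciface_forward(&output, &input);
}
BENCHMARK(BM_EfficientNetQuantized)->Unit(benchmark::kMillisecond);
```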

About the quantized model

The model is generated from the EfficientNet-EdgeTpu(S)-quant model at https://coral.ai/models/image-classification/ using iree-import-tflite. To completely eliminate floating-point operations, the Softmax layer is removed from the original model and re-implemented in the C++ benchmark file.
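
Since Softmax is stripped from the quantized graph, the probabilities have to be computed on the host after the model runs. Below is a minimal sketch of such a post-processing step; the function name and types are illustrative, not the exact code in this PR:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative host-side softmax applied to the model's output scores after
// the quantized graph has run. It executes in plain floating point on the CPU,
// outside the quantized model.
std::vector<float> softmax(const std::vector<float> &scores) {
  // Subtract the maximum score for numerical stability before exponentiating.
  float maxScore = *std::max_element(scores.begin(), scores.end());
  std::vector<float> probs(scores.size());
  float sum = 0.0f;
  for (std::size_t i = 0; i < scores.size(); ++i) {
    probs[i] = std::exp(scores[i] - maxScore);
    sum += probs[i];
  }
  for (float &p : probs)
    p /= sum;
  return probs;
}
```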