Ultra-lightweight face detection model
Description
This model is a lightweight face detection model designed for edge computing devices.
Model
Dataset
The training set is a VOC-format dataset generated from the WIDER FACE dataset using the cleaned WIDER FACE labels provided by RetinaFace.
Source
You can find the source code here.
Demo
Run the demo.py Python script for an example.
Inference
Input
The input tensor is 1 x 3 x height x width with mean values 127, 127, 127 and scale factor 1.0 / 128. The input image has to be converted to RGB format and resized to 320 x 240 pixels for the version-RFB-320 model (or 640 x 480 for the version-RFB-640 model).
Preprocessing
Given a path image_path to the image you would like to score:
Output
The model outputs two arrays, (1 x 4420 x 2) and (1 x 4420 x 4), of scores and boxes.
Postprocessing
In postprocessing, threshold filtration and non-max suppression are applied to the scores and boxes arrays.
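As a rough illustration, the preprocessing (mean subtraction, scaling, HWC-to-CHW layout) and the postprocessing (score thresholding plus greedy non-max suppression) described above could look like the following NumPy sketch. The function names and the 0.7 / 0.5 thresholds are illustrative, not the repository's API, and the image is assumed already decoded to RGB and resized to the model's input size:

```python
import numpy as np

def preprocess(rgb_image):
    """Normalize an RGB image (H x W x 3, uint8) already resized to the
    model input size (e.g. 240 x 320 for version-RFB-320)."""
    x = (rgb_image.astype(np.float32) - 127.0) / 128.0  # mean 127, scale 1/128
    x = np.transpose(x, (2, 0, 1))                      # HWC -> CHW
    return np.expand_dims(x, axis=0)                    # -> 1 x 3 x H x W

def postprocess(scores, boxes, score_threshold=0.7, iou_threshold=0.5):
    """Threshold filtration + greedy non-max suppression.
    scores: (N, 2) [background, face]; boxes: (N, 4) [x1, y1, x2, y2]."""
    face_scores = scores[:, 1]
    keep = face_scores > score_threshold          # threshold filtration
    face_scores, boxes = face_scores[keep], boxes[keep]
    order = np.argsort(-face_scores)              # highest score first
    picked = []
    while order.size > 0:
        i = order[0]
        picked.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        # IoU of box i with each remaining box
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_threshold]        # suppress heavy overlaps
    return boxes[picked], face_scores[picked]
```

The exact thresholds and box decoding used by the demo may differ; consult demo.py in the source repository for the reference implementation.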
Quantization
version-RFB-320-int8 is obtained by quantizing the fp32 version-RFB-320 model. We use Intel® Neural Compressor with the onnxruntime backend to perform quantization. View the instructions to understand how to use Intel® Neural Compressor for quantization.
Prepare Model
Download model from ONNX Model Zoo.
Convert opset version to 12 for more quantization capability.
import onnx
from onnx import version_converter
model = onnx.load('version-RFB-320.onnx')
model = version_converter.convert_version(model, 12)
onnx.save_model(model, 'version-RFB-320-12.onnx')
Model quantize
cd neural-compressor/examples/onnxrt/body_analysis/onnx_model_zoo/ultraface/quantization/ptq_static
bash run_tuning.sh --input_model=path/to/model \ # model path as *.onnx
                   --dataset_location=/path/to/data \
                   --output_model=path/to/save
Contributors
License
MIT