edvardHua / PoseEstimationForMobile

:dancer: Real-time single person pose estimation for Android and iOS.
Apache License 2.0
1k stars 268 forks

uint8 quantization #117

Open Jove125 opened 4 years ago

Jove125 commented 4 years ago

Hello!

Has anyone tried uint8 quantization of this model?

I tried to use this script:

```
tflite_convert \
  --graph_def_file=./model.pb \
  --output_file=./pose-quant.tflite \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --inference_type=QUANTIZED_UINT8 \
  --input_shape="1,96,96,3" \
  --input_array=image \
  --output_array=Convolutional_Pose_Machine/stage_5_out \
  --default_ranges_min=0 \
  --default_ranges_max=6 \
  --mean=0 \
  --std_dev=1
```

After that I changed the Android app to work with the quantized model (byte input and output instead of float I/O).

Performance improved by about 1.5-2x, but the keypoints are very unstable (they jump back and forth). It seems to me the main problem is the range of the output data: it is a byte now and should span 0-255, but in practice it only spans about 0~30. Neighbouring cells in heatMapArray end up with the same value, so it is hard to pick the most correct one, and that is why the points "jump".
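A sketch of why the output only spans ~0..30 (names and numbers below are illustrative, not from the repo): with `--default_ranges_min=0 --default_ranges_max=6`, the converter maps the float range [0, 6] onto the 256 uint8 codes, so the step size is 6/255 and heatmap peaks around ~0.7 only use codes 0..30. Distinct float scores collapse to the same code, which produces the argmax ties:

```python
# Quantization math implied by --default_ranges_min=0 --default_ranges_max=6.
RANGE_MIN, RANGE_MAX = 0.0, 6.0
SCALE = (RANGE_MAX - RANGE_MIN) / 255.0   # one uint8 step ~= 0.0235

def quantize(x: float) -> int:
    """Map a float in [RANGE_MIN, RANGE_MAX] to a uint8 code."""
    q = round((x - RANGE_MIN) / SCALE)
    return max(0, min(255, q))

def dequantize(q: int) -> float:
    """Map a uint8 code back to the float it represents."""
    return RANGE_MIN + q * SCALE

# A heatmap peak of ~0.7 only reaches code 30 of the available 0..255:
print(quantize(0.7))                       # 30
# Two distinct float scores collapse to the same code, so argmax ties:
print(quantize(0.300), quantize(0.310))    # 13 13
```

If the calibrated range were tighter (say [0, 1] instead of [0, 6]), the same peak would use far more of the uint8 range and neighbouring cells would tie much less often.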

Does anybody have an idea how to fix it? Change the range to 0-255, change the GaussianBlur parameters, quantize somehow differently, or something else?
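One possible mitigation on the decoding side (a sketch, not the repo's code): when several heatmap cells tie at the quantized maximum, take a score-weighted centroid of the cells near the peak instead of a bare argmax, so the keypoint no longer jumps between equal-valued neighbours from frame to frame. `decode_keypoint` below is a hypothetical helper, not part of this project:

```python
def decode_keypoint(heatmap):
    """heatmap: 2-D list of uint8 scores; returns (x, y) as floats."""
    h, w = len(heatmap), len(heatmap[0])
    peak = max(max(row) for row in heatmap)
    # Average the coordinates of all cells within one quantization step
    # of the peak, weighted by their scores.
    sx = sy = total = 0.0
    for y in range(h):
        for x in range(w):
            v = heatmap[y][x]
            if v >= peak - 1:
                sx += x * v
                sy += y * v
                total += v
    return sx / total, sy / total

# Two adjacent cells share the quantized maximum (29); a bare argmax
# would pick either one arbitrarily, the centroid lands between them:
hm = [[0,  0,  0, 0],
      [0, 29, 29, 0],
      [0,  0,  0, 0]]
x, y = decode_keypoint(hm)   # (1.5, 1.0)
```

This does not fix the coarse quantization itself; re-quantizing with calibrated ranges (e.g. post-training quantization with a representative dataset instead of `--default_ranges_*`) would attack the root cause.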

lzcchl commented 3 years ago

Hello, did you solve it? I am facing the same problem.