610265158 / Peppa_Pig_Face_Landmark

A simple face detect and alignment method, which is easy and stable.
Apache License 2.0

Confused about model inference time #8

Closed · xiao-keeplearning closed this issue 4 years ago

xiao-keeplearning commented 4 years ago

Hi, dear Peppa man, I found you have listed the model inference times:

shufflenetv2_0.75, including the tflite model (time cost: mac i5-8279U@2.4GHz, tf2.0 5ms+, tflite 3.7ms+-, model size 2.5M). But when I run the demo on my 1080 / CPU, they take almost the same time.

On CPU (Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz): one image cost 0.016803 s, facebox detect cost 0.0021691322326660156 s.
On the 1080: one image cost 0.011924 s, facebox detect cost 0.001332998275756836 s.

It seems your listed face landmark model inference time is shorter than what I measure?
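In case it helps, I time each stage roughly like this sketch (`detector` and `landmark_model` are placeholder callables, not the repo's actual API):

```python
import time

def time_one_image(detector, landmark_model, image):
    t0 = time.perf_counter()
    boxes = detector(image)                                  # face box detection stage
    t1 = time.perf_counter()
    points = [landmark_model(image, box) for box in boxes]   # landmark stage
    t2 = time.perf_counter()
    print(f"facebox detect cost {t1 - t0:.6f} s, one image cost {t2 - t0:.6f} s")
    return boxes, points
```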

610265158 commented 4 years ago


The facebox detect cost is not a typical inference cost; some frames are skipped, so the detector does not run on every image.

You should measure the time cost as an average over many inference runs.

By the way, small models benefit less from GPU devices.
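For example, something like this sketch measures the average over many runs after a warm-up (the model path and the 160x160 RGB input shape are assumptions; adjust them to your own export):

```python
import time
import numpy as np
import tensorflow as tf

# Assumed model path and input shape; change both to match your export.
model = tf.keras.models.load_model("./model/keypoints.h5", compile=False)
dummy = np.random.rand(1, 160, 160, 3).astype(np.float32)

# Warm-up: the first calls include graph tracing and memory allocation.
for _ in range(10):
    model(dummy, training=False)

# Average over many iterations instead of trusting a single frame.
n = 200
start = time.perf_counter()
for _ in range(n):
    model(dummy, training=False)
avg_ms = (time.perf_counter() - start) / n * 1000
print(f"average landmark inference: {avg_ms:.2f} ms")
```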

610265158 commented 4 years ago

You'd better measure the inference time independently. And tflite should be faster, because of static graph execution and more optimizations. Thanks
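For example, a sketch of timing the tflite model on its own (the .tflite path is an assumption; the input shape and dtype are read from the interpreter itself):

```python
import time
import numpy as np
import tensorflow as tf

# Assumed .tflite path; replace with your converted model.
interpreter = tf.lite.Interpreter(model_path="./model/keypoints.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
dummy = np.random.rand(*inp["shape"]).astype(inp["dtype"])

# Warm-up, then average; tflite runs a static graph, so per-call overhead is low.
for _ in range(10):
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()

n = 200
start = time.perf_counter()
for _ in range(n):
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()
    _ = interpreter.get_tensor(out["index"])
avg_ms = (time.perf_counter() - start) / n * 1000
print(f"average tflite inference: {avg_ms:.2f} ms")
```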