PINTO0309 / PINTO_model_zoo

A repository for storing models that have been inter-converted between various frameworks. Supported frameworks are TensorFlow, PyTorch, ONNX, OpenVINO, TFJS, TFTRT, TensorFlowLite (Float32/16/INT8), EdgeTPU, CoreML.
https://qiita.com/PINTO
MIT License

Mediapipe facemesh tfjs #36

Closed gauravgola96 closed 3 years ago

gauravgola96 commented 4 years ago

Are there any plans to optimize for mobile browsers? Or will the TFJS model provided in the repo work as-is?

PINTO0309 commented 4 years ago

Does the already committed model not fit your needs? https://github.com/PINTO0309/PINTO_model_zoo#4-2d3d-face-detection https://github.com/PINTO0309/PINTO_model_zoo/tree/master/032_FaceMesh/08_tfjs

gauravgola96 commented 4 years ago

I tried it in a mobile browser on medium/low-end devices with a decent GPU, but got only around 5-6 FPS (WebGL backend). Can you tell me what optimizations you made in this TFJS model? Also, TFJS facemesh was recently updated with iris support, which dropped its performance by another 5-7 FPS.

PINTO0309 commented 4 years ago

I don't know what kind of mobile device you're using, but I ran it in Google Chrome on my Pixel 4a and it performed at around 10 FPS. I think it's a GPU performance issue.

gauravgola96 commented 4 years ago

I am testing on https://www.devicespecifications.com/en/model/8d2f4cea and getting 5 FPS.

PINTO0309 commented 4 years ago

Hmmm. There doesn't seem to be any significant difference between the performance of your device and mine. Have you tried the following demo? https://terryky.github.io/tfjs_webgl_app/facemesh

gauravgola96 commented 4 years ago

Yes, that's the one I tried. I'm getting 5 FPS.

PINTO0309 commented 4 years ago

I generated and committed a TFJS model of Float16, hoping that the GPU would be used effectively. https://github.com/PINTO0309/PINTO_model_zoo/tree/master/032_FaceMesh/08_tfjs

@terryky Does your FaceMesh example program use the Float32 model? Have you ever tried the Float16 model? I don't know if it will improve performance.

gauravgola96 commented 4 years ago

From the network calls, it looks like this demo https://terryky.github.io/tfjs_webgl_app/facemesh is using https://storage.googleapis.com/tfhub-tfjs-modules/mediapipe/tfjs-model/facemesh/1/default/1/model.json It is not using your quantized model.

terryky commented 4 years ago

Yes, the facemesh sample app simply uses the original MediaPipe TFJS model. By default, it runs TFJS with the WebGL backend. Performance may increase if it uses the wasm backend (it depends on the device).

@gauravgola96 Have you tried the wasm backend instead of the WebGL backend? You can use the wasm backend just by enabling the following line: https://github.com/terryky/tfjs_webgl_app/blob/fc404c39ba9a6f834f18a40f546564e94b8fbc69/facemesh/webgl_main.js#L5
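For reference, that switch amounts to calling `tf.setBackend('wasm')` (with `@tensorflow/tfjs-backend-wasm` registered) and awaiting `tf.ready()` before inference. A minimal sketch of the fallback logic follows; the `pickBackend` helper is illustrative, not part of tfjs or the demo app:

```javascript
// Illustrative backend-selection helper (not part of tfjs or the demo).
// In a real app you would call `await tf.setBackend(name); await tf.ready();`
// after registering '@tensorflow/tfjs-backend-wasm' for the wasm backend.
function pickBackend(preferred, available) {
  for (const name of preferred) {
    if (available.has(name)) return name; // first preference that is registered
  }
  return 'cpu'; // tfjs always ships a pure-JS cpu backend as a last resort
}

// Example: prefer wasm, fall back to webgl when wasm isn't registered.
const registered = new Set(['webgl', 'cpu']);
console.log(pickBackend(['wasm', 'webgl'], registered)); // 'webgl'
```

Which backend is faster varies by device; wasm tends to win on phones with weak GPUs, webgl on phones with strong ones.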

gauravgola96 commented 4 years ago

@terryky @PINTO0309 Since you used the original MediaPipe TFJS model, I tested https://storage.googleapis.com/tfjs-models/demos/facemesh/index.html, which is the official demo, with iris prediction off. WebGL backend: 5-6 FPS; wasm backend: 4-5 FPS.

Can I somehow use the quantized (Float16) model in your demo project? Also, do I have to use the quantized BlazeFace model when using the quantized FaceMesh model?
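The FPS figures quoted in this thread are typically measured as the number of frames rendered inside a sliding one-second window. A minimal sketch of such a counter (the class name and API are mine, not taken from either demo):

```javascript
// Sliding-window FPS counter of the kind these demos display on screen.
// The class name and API are illustrative, not taken from either demo.
class FpsCounter {
  constructor() { this.times = []; }
  // Call once per rendered frame with a millisecond timestamp
  // (e.g. performance.now() in a browser).
  tick(nowMs) {
    this.times.push(nowMs);
    // Drop timestamps older than one second.
    while (this.times.length && nowMs - this.times[0] > 1000) {
      this.times.shift();
    }
    return this.times.length; // frames rendered in the last 1000 ms
  }
}

// Six frames spaced 200 ms apart all fall inside the one-second window.
const counter = new FpsCounter();
let fps = 0;
for (let t = 0; t <= 1000; t += 200) fps = counter.tick(t);
console.log(fps); // 6
```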

gauravgola96 commented 4 years ago

However, I tried to load your quantized model in facemesh and got this error.

[screenshot of the error]

terryky commented 4 years ago

I suspect that using the fp16 model will not improve performance: I have tried an fp16 model in a TensorFlow Lite environment and did not see a noticeable performance improvement.

The tflite port is here: https://github.com/terryky/tflite_gles_app/tree/master/gl2facemesh