Closed: gauravgola96 closed this issue 3 years ago
Does the already committed model not fit your needs? https://github.com/PINTO0309/PINTO_model_zoo#4-2d3d-face-detection https://github.com/PINTO0309/PINTO_model_zoo/tree/master/032_FaceMesh/08_tfjs
I tried it in mobile browsers on medium/low-end devices with a decent GPU, but got only around 5-6 FPS (WebGL backend). Can you tell me what optimizations you did in this tfjs model? Also, tfjs facemesh was recently updated with iris support, which dropped its performance by a further 5-7 FPS.
I don't know what kind of mobile device you're using, but when I ran it in Google Chrome on my Pixel 4a it performed at around 10 FPS. I think it's a GPU performance issue.
I am testing on https://www.devicespecifications.com/en/model/8d2f4cea and getting 5 FPS.
Hmmm. There doesn't seem to be any significant difference in performance between your device and mine. Have you tried the following demo? https://terryky.github.io/tfjs_webgl_app/facemesh
Yes, that is exactly the demo I tried. Getting 5 FPS.
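For what it's worth, to make sure we're comparing FPS numbers the same way, here is a tiny plain-JS meter (a hypothetical helper, not part of either demo) that averages over all frames seen; in the browser you would feed `tick()` the timestamp from each `requestAnimationFrame` callback.

```javascript
// Minimal FPS meter sketch. tick(nowMs) is called once per rendered frame
// with a millisecond timestamp; fps() returns the average frame rate over
// the frames observed so far.
function createFpsMeter() {
  let frames = 0;
  let first = null;
  let last = null;
  return {
    tick(nowMs) {
      if (first === null) first = nowMs;
      last = nowMs;
      frames += 1;
    },
    fps() {
      if (frames < 2) return 0; // need at least one interval to measure
      return ((frames - 1) * 1000) / (last - first);
    },
  };
}
```

Usage in a render loop would look like `requestAnimationFrame(function loop(t) { meter.tick(t); render(); requestAnimationFrame(loop); })`.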
I generated and committed a TFJS model of Float16, hoping that the GPU would be used effectively. https://github.com/PINTO0309/PINTO_model_zoo/tree/master/032_FaceMesh/08_tfjs
@terryky Does your FaceMesh example program use the Float32 model? Have you ever tried the Float16 model? I don't know whether it would improve performance on my device.
From the network calls, it looks like this demo https://terryky.github.io/tfjs_webgl_app/facemesh
is using https://storage.googleapis.com/tfhub-tfjs-modules/mediapipe/tfjs-model/facemesh/1/default/1/model.json
It is not using your quantized model.
Yes, the facemesh sample app simply uses the original mediapipe tfjs model. By default it runs tfjs with the webgl backend. Performance may increase if it uses the wasm backend (it depends on the device).
@gauravgola96 Have you tried the wasm backend instead of the webgl backend? You can switch to the wasm backend just by enabling the following line: https://github.com/terryky/tfjs_webgl_app/blob/fc404c39ba9a6f834f18a40f546564e94b8fbc69/facemesh/webgl_main.js#L5
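As a general pattern, backend selection in tfjs can be made robust by trying backends in order of preference. The sketch below is a hypothetical helper (not code from this repo): it takes the `tf` namespace as a parameter so it is agnostic to how tfjs was imported, and it assumes that `@tensorflow/tfjs-backend-wasm` has been imported beforehand if you want `'wasm'` to be a registered option.

```javascript
// Hypothetical helper: try tfjs backends in order of preference and return
// the name of the one that actually initialized. tf.setBackend resolves to
// a boolean indicating success; tf.ready waits for initialization.
async function selectBackend(tf, preferred = ['wasm', 'webgl', 'cpu']) {
  for (const name of preferred) {
    try {
      if (await tf.setBackend(name)) {
        await tf.ready();
        return tf.getBackend();
      }
    } catch (e) {
      // Backend not registered or failed to initialize; try the next one.
    }
  }
  throw new Error('no usable tfjs backend');
}
```

Called as `await selectBackend(tf)`, it falls back to webgl (and then cpu) on devices where the wasm backend is unavailable, so the demo still runs either way.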
@terryky @PINTO0309 Since you are using the original mediapipe tfjs model, I tested https://storage.googleapis.com/tfjs-models/demos/facemesh/index.html (the official demo) with iris prediction off. WebGL backend: 5-6 FPS. Wasm backend: 4-5 FPS.
Can I use the quantized (Float16) model in your demo project somehow? Also, do I have to use the quantized Blazeface model as well when using the quantized facemesh model?
However, I tried to load your quantized model in facemesh and got this error.
I suspect that using the fp16 model will not improve performance: I tried the fp16 model in the TensorFlow Lite environment and did not see a noticeable performance improvement.
tflite port is here: https://github.com/terryky/tflite_gles_app/tree/master/gl2facemesh
Are there any plans for optimization for mobile browsers? Or will the tfjs model provided in the repo work as-is?