ruxailab / web-eye-tracker-front


Improvement: research whether there is a newer, improved version of the tensorflow.js model we use #47

Open KarinePistili opened 5 months ago

KarinePistili commented 5 months ago

Research whether the TensorFlow package for JavaScript has new updates for the model, and whether we can use something more optimized. If so, list the implementation requirements so we can decide whether or not to upgrade it.

sitamgithub-MSIT commented 5 months ago

@KarinePistili Does this issue relate to searching for models for efficient gaze tracking? In my proposal, I mentioned the web compatibility of models and possible options from MediaPipe and TensorFlow.js models on the Kaggle Model Hub.

KarinePistili commented 5 months ago

Hi @sitamgithub-MSIT, this issue relates to the iris tracking model inside the web project that is used for calibration. Right now we use the face landmarks one, and we would like to know if there is something more up to date or more precise we could use, as it was integrated a while ago.

Check out a little bit about the model we have there: https://blog.tensorflow.org/2020/11/iris-landmark-tracking-in-browser-with-MediaPipe-and-TensorFlowJS.html

sitamgithub-MSIT commented 5 months ago


Alright. Thank you for sharing this. When I was putting together my proposal, I did some research on this, so I will post those details here soon.

KarinePistili commented 5 months ago

Okay, thanks!

sitamgithub-MSIT commented 5 months ago

So, I did my early research on this. I first looked into the package.json file; here are the relevant imports for the models we are using (a short sketch of how this pinned version is typically loaded follows the list):

"@tensorflow-models/face-landmarks-detection": "0.0.3",
"@tensorflow/tfjs": "^3.10.0",
"@tensorflow/tfjs-backend-wasm": "^3.10.0",
"@tensorflow/tfjs-converter": "^3.10.0",
"@tensorflow/tfjs-core": "^3.10.0",

Regarding the model portion, there are essentially two choices: MediaPipe and TensorFlow.js. Previously, MediaPipe offered a ready-made solution in the form of Iris tracking; however, its support has ended. The Kaggle model for it can be found here.

As of right now, there are two models available; one is located here. TensorFlow.js offers a useful face landmark detection solution, which is what our project already uses; I will get to that point later. Using MediaPipe is an additional way of running the same model, but that is not really necessary because TensorFlow.js is already part of our project. A sketch of the newer TensorFlow.js API is included below.
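
For comparison, the 1.x releases of @tensorflow-models/face-landmarks-detection replace load() with createDetector(), and the refineLandmarks option attaches the iris keypoints. A minimal sketch of that upgrade path, assuming the same videoElement placeholder (versions and options would still need to be verified against the current docs):

import '@tensorflow/tfjs-core';
import '@tensorflow/tfjs-backend-webgl';
import * as faceLandmarksDetection from '@tensorflow-models/face-landmarks-detection';

// 1.x API: createDetector() replaces load(); refineLandmarks adds iris keypoints.
const detector = await faceLandmarksDetection.createDetector(
  faceLandmarksDetection.SupportedModels.MediaPipeFaceMesh,
  {
    runtime: 'tfjs',        // stay on the TensorFlow.js runtime we already ship
    refineLandmarks: true,  // include the 10 iris keypoints (indices 468-477)
    maxFaces: 1,
  }
);

// Each detected face exposes keypoints as {x, y, z, name?} objects.
const faces = await detector.estimateFaces(videoElement);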

The second option is to use the mediapipe face landmark detection solution. It is also available for web. Here is the model page. Using the npm package manager, we can access it here. A face landmark detection task is included in the vision task set. All of the options related to left and right iris as well as other attributes are located here under the FaceLandmarker class. We can thus also implement via mediapipe tasks-vision.