tensorflow / tfjs

A WebGL accelerated JavaScript library for training and deploying ML models.
https://js.tensorflow.org
Apache License 2.0

Iris tracking demo #4395

Closed: Sunyingbin closed this issue 3 years ago

Sunyingbin commented 3 years ago

I saw this blog: https://blog.tensorflow.org/2020/11/iris-landmark-tracking-in-browser-with-MediaPipe-and-TensorFlowJS.html (image attached). Is there a demo for this project? Please send it to me, thank you.

vladmandic commented 3 years ago

@Sunyingbin I'm using it in my library https://github.com/vladmandic/human

Sunyingbin commented 3 years ago

> @Sunyingbin I'm using it in my library https://github.com/vladmandic/human

Hi, I tested the Live Demo, but iris tracking doesn't work. There is no effect like in the picture I provided above.

vladmandic commented 3 years ago

@Sunyingbin What effect? Coloring of the iris? That is trivial once you have the actual coordinates of the iris.
The example I've sent is about using the model to get results; what you want to do with the results is up to you.

google-ml-butler[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you.

google-ml-butler[bot] commented 3 years ago

Closing as stale. Please @mention us if this needs more attention.

lghasemzadeh commented 3 years ago

> @Sunyingbin I'm using it in my library https://github.com/vladmandic/human

Hello,

Is it possible to install it on Ubuntu and use it in Python scripts? I want to develop a simple gaze estimation algorithm in Python (I don't know JavaScript).

Thx

vladmandic commented 3 years ago

@lghasemzadeh You have to find a TensorFlow saved model, as graph models cannot be used with Python directly.
Or you could use something like https://github.com/patlevin/tfjs-to-tf to convert a TF.js graph model to a TF saved model that can be used in Python.
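
For reference, a minimal sketch of what that conversion plus a quick sanity check in Python could look like. The `tfjs_graph_converter` CLI invocation is based on that project's README as I understand it, and the paths and input shape are placeholders, so treat this as an illustration rather than a verified recipe:

```python
# Sketch: convert a TF.js graph model to a TF SavedModel, then load it in Python.
# Assumes `pip install tfjs-graph-converter` (https://github.com/patlevin/tfjs-to-tf)
# provides the `tfjs_graph_converter` CLI; flags and paths below are placeholders:
#
#   tfjs_graph_converter ./iris-tfjs-model ./iris-saved-model --output_format tf_saved_model
#
import numpy as np
import tensorflow as tf

model = tf.saved_model.load("./iris-saved-model")      # load the converted SavedModel
infer = model.signatures["serving_default"]            # signature name may differ; check model.signatures
dummy = np.zeros((1, 64, 64, 3), dtype=np.float32)     # input shape is an assumption, not from this thread
outputs = infer(tf.constant(dummy))
print({name: t.shape for name, t in outputs.items()})  # inspect output tensor names and shapes
```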

lghasemzadeh commented 3 years ago

> @lghasemzadeh You have to find a TensorFlow saved model, as graph models cannot be used with Python directly. Or you could use something like https://github.com/patlevin/tfjs-to-tf to convert a TF.js graph model to a TF saved model that can be used in Python.

Thank you for the response.

There was another option, using MediaPipe, which as far as I know is a Python library. But it seems to be an older version and not as robust as the new one (https://blog.tensorflow.org/2020/11/iris-landmark-tracking-in-browser-with-MediaPipe-and-TensorFlowJS.html). I am new to this field, so I don't yet understand why I need graph models, but I will learn about it.

What I actually need is a model/library/algorithm that gives me accurate and robust facial landmarks or a facemesh. Then I want to use the IDs and coordinates of the eye-region landmarks to find the iris and estimate the gaze direction. I have tried several different libraries and open-source algorithms, such as dlib facial landmarks and TensorFlow facial landmarks, but they are not robust at all, and I cannot get an accurate gaze direction as the output. This iris landmark tracking is exactly what I am looking for, but I cannot implement it in Python. It is surprising that such a library was not provided for Python as well; is there a reason for that? And do you have any idea about the issue I explained?

vladmandic commented 3 years ago

Different types of TensorFlow can use different types of models: TF.js uses graph or layers models, while TensorFlow for Python uses the native saved model format, but can also use Keras or TFLite, etc.

The current version of the MediaPipe solution includes models in TFLite format which you could use; they're just optimized for edge devices with lower precision (quantized to int). If you want a native saved model to use with Python, you can either grab my graph models from the human library and convert them to the saved model format, or search around for them directly.

Also, to use the iris model, you first need face detection. Any detector would work, but it's designed to work with the blazeface model (one model for detection, one for the 468-point facemesh, and then the iris model augments the results with an additional 10 points).
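
If you go the TFLite route, a minimal Python sketch could look like the following. The file name `iris_landmark.tflite` and the random stand-in for an eye-region crop are assumptions (in the real pipeline that crop comes from blazeface plus the 468-point facemesh); only the standard `tf.lite.Interpreter` calls are plain TensorFlow API:

```python
# Sketch: run the iris TFLite model from Python with the TF Lite interpreter.
# `iris_landmark.tflite` is an assumed file name; the eye crop below is random stand-in data.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="iris_landmark.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
_, h, w, c = inp["shape"]                                    # read the expected crop size from the model
eye_crop = np.random.rand(1, h, w, c).astype(inp["dtype"])   # replace with a real eye-region crop

interpreter.set_tensor(inp["index"], eye_crop)
interpreter.invoke()

for out in interpreter.get_output_details():                 # iris / eye-contour landmark tensors
    print(out["name"], interpreter.get_tensor(out["index"]).shape)
```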

lghasemzadeh commented 3 years ago

> Different types of TensorFlow can use different types of models: TF.js uses graph or layers models, while TensorFlow for Python uses the native saved model format, but can also use Keras or TFLite, etc.
>
> The current version of the MediaPipe solution includes models in TFLite format which you could use; they're just optimized for edge devices with lower precision (quantized to int). If you want a native saved model to use with Python, you can either grab my graph models from the human library and convert them to the saved model format, or search around for them directly.
>
> Also, to use the iris model, you first need face detection. Any detector would work, but it's designed to work with the blazeface model (one model for detection, one for the 468-point facemesh, and then the iris model augments the results with an additional 10 points).

@vladmandic Thank you. I have several questions :)

1. I was not able to run your demo. What should I do?
2. Have you ever tried this package (https://blog.tensorflow.org/2020/11/iris-landmark-tracking-in-browser-with-MediaPipe-and-TensorFlowJS.html) on a grayscale input/video/stream, not RGB?
3. Have you ever tested your code with an external camera, not a webcam (for example, an Intel camera)?
4. You mentioned that your model covers '3D Face Detection, Face Embedding & Recognition, Body Pose Tracking, Hand & Finger Tracking, Iris Analysis, Age & Gender & Emotion Prediction & Gesture Recognition'. What do you mean by iris analysis?

I would appreciate it if you could answer.

lghasemzadeh commented 3 years ago

@vladmandic I just ran your demo, and I have more questions. Is it OK for you to talk here, or can we continue on LinkedIn? Of course, only if you have time to answer my questions :)

vladmandic commented 3 years ago

> I was not able to run your demo. What should I do?

I need more info on that. What's not working? Can you run the demo via the link provided in the README (hosted on GitHub Pages)?

> Have you ever tried this package (https://blog.tensorflow.org/2020/11/iris-landmark-tracking-in-browser-with-MediaPipe-and-TensorFlowJS.html) on a grayscale input/video/stream, not RGB?

Yes, it works. By the way, if you look at the credits, you'll see I'm using the same model in Human.

> Have you ever tested your code with an external camera, not a webcam (for example, an Intel camera)?

I have. The only issue is the video stream format. Any stream format that can be played with an HTML <video> element will work with my library. Some cameras prefer to stream in RTSP format, which is unsupported by browsers, in which case it needs to be transcoded to a different format (this can be done on the fly, but it must be done on the server side), typically using software like ffmpeg. The demo does not include code for external cameras, but I have done it elsewhere.

> What do you mean by iris analysis?

Face detection detects faces. Facemesh then detects 468 points on the face. Iris then augments those results with an additional 5 points per iris, for a total of 478 points. Additionally, there is gesture.js, which does a simple analysis of those results to check things like whether the user is facing left/right/up/down and/or looking at the camera, etc.
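
To connect this to the gaze-estimation question: once you have the 478-point mesh, a very crude horizontal gaze estimate can be computed by projecting the iris center onto the eye-corner axis. The landmark indices below follow the publicly documented MediaPipe facemesh/iris layout and are my assumptions, not something defined in this thread:

```python
# Sketch: crude horizontal gaze ratio from a 478-point mesh (468 facemesh + 2 x 5 iris points).
# Landmark indices are assumed from the public MediaPipe facemesh layout, not from this thread.
import numpy as np

LEFT_EYE_CORNERS  = (362, 263)   # assumed inner / outer corner of the left eye
RIGHT_EYE_CORNERS = (133, 33)    # assumed inner / outer corner of the right eye
LEFT_IRIS_CENTER  = 473          # assumed first point of the left iris block
RIGHT_IRIS_CENTER = 468          # assumed first point of the right iris block

def horizontal_gaze_ratio(mesh: np.ndarray, corners: tuple, iris_center: int) -> float:
    """~0.5 means the iris sits midway between the corners (roughly looking straight ahead)."""
    inner, outer = mesh[corners[0], :2], mesh[corners[1], :2]
    iris = mesh[iris_center, :2]
    axis = outer - inner
    return float(np.dot(iris - inner, axis) / (np.dot(axis, axis) + 1e-6))

mesh = np.random.rand(478, 3)    # stand-in for real facemesh + iris output
print(horizontal_gaze_ratio(mesh, LEFT_EYE_CORNERS, LEFT_IRIS_CENTER))
```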

For additional questions about my library, it's probably best to open an issue in my Git repository.

lghasemzadeh commented 3 years ago

Thank you. For the third question: if the model is converted for Python, do we still have the problem with the camera stream format, since it doesn't need to work in the browser anymore? Does it still need to be playable in an HTML <video> element?

vladmandic commented 3 years ago

Something needs to be able to parse the stream, so if you import a video library into Python that does the job (such as OpenCV, for example), great.
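
For example, a minimal OpenCV capture loop in Python could look like this; the RTSP URL is a placeholder, and passing `0` opens the default local webcam instead:

```python
# Sketch: read frames from a camera or RTSP stream with OpenCV, as suggested above.
import cv2

cap = cv2.VideoCapture("rtsp://camera.local/stream")   # placeholder URL; use 0 for the default webcam
while True:
    ok, frame = cap.read()                             # frame is a BGR numpy array
    if not ok:
        break
    # ... feed `frame` (after preprocessing) into the face / iris pipeline here ...
    cv2.imshow("preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):              # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```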

lghasemzadeh commented 3 years ago

Yes, exactly. Thanks!

Rsun24 commented 4 months ago

> @Sunyingbin What effect? Coloring of the iris? That is trivial once you have the actual coordinates of the iris. The example I've sent is about using the model to get results; what you want to do with the results is up to you.

Can you send me the example too? Thanks in advance.