antoinelame / GazeTracking

👀 Eye Tracking library easily implementable to your projects
MIT License

Hello, what is the algorithm based on this project, is there a related paper? #5

Open ZZH0908 opened 5 years ago

lintangsutawika commented 5 years ago

The algorithm is straightforward:

  1. Isolate the eyes from the face. This is done by taking specific facial landmarks, using shape_predictor_68_face_landmarks.dat. The process is in eye.py.
  2. Detect the iris using contours, in pupil.py.
  3. Measure the coordinates of the iris to determine the gaze direction, in gaze_tracking.py.
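For step 1: the 68-landmark dlib model assigns fixed indices to each eye (points 36 to 41 for one eye, 42 to 47 for the other), so isolating an eye reduces to slicing those landmarks and cropping their bounding box. A minimal sketch in plain Python, assuming landmarks are given as a list of 68 (x, y) tuples (the helper name and margin are illustrative, not the library's API):

```python
LEFT_EYE_POINTS = list(range(36, 42))   # eye landmark indices in the 68-point model
RIGHT_EYE_POINTS = list(range(42, 48))

def eye_bounding_box(landmarks, indices, margin=5):
    """Bounding box (x0, y0, x1, y1) around one eye's landmarks,
    padded by a small margin so the later crop keeps some context."""
    xs = [landmarks[i][0] for i in indices]
    ys = [landmarks[i][1] for i in indices]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)
```

The crop of the eye region is then just a slice of the frame with those coordinates.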

antoinelame commented 5 years ago

I will definitely write a paper about the algorithm in the next few days. But @lintangsutawika summed up the main steps well.

Once I have isolated an eye (thanks to shape_predictor_68_face_landmarks.dat), here's how I detect the iris:

  1. I blur the image very slightly to remove any noise
  2. I erode the image to remove backlights
  3. I binarize the image to have only black and white pixels (no grayscale)
  4. I detect the contour and calculate centroid (I consider that it's the pupil position)

Pupil Detection Algorithm

But when I binarize the image, I need to pass a threshold value to separate black from white pixels. This value varies a lot depending on the person and the webcam (a range of about 10 to 75). Pupil detection can be very accurate with the right value, or inaccurate if the value is wrong.
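Steps 3 and 4 above, with that threshold exposed as a parameter, can be sketched in numpy alone (a stand-in for the cv2.threshold and cv2.moments calls; the blur and erode preprocessing are assumed done, and the function name is hypothetical):

```python
import numpy as np

def pupil_centroid(eye_gray, threshold):
    """Binarize a grayscale eye crop, then take the centroid of the
    dark (pupil) pixels. Returns (x, y) or None if nothing is dark."""
    binary = eye_gray < threshold        # True where the pixel is dark
    ys, xs = np.nonzero(binary)
    if len(xs) == 0:
        return None
    return int(xs.mean()), int(ys.mean())
```

The quality of the result depends entirely on the threshold: too low and no pixels survive, too high and the eyelids and shadows pull the centroid away from the pupil.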

Threshold Values

I didn't want to bother users by forcing them to find this threshold value themselves. And anyway, it's not great for this value to be hard-coded in any script. So I created an automatic calibration algorithm (76aa16e18196b9a646941cafc6cf1ca0d05774b5) to find the right threshold value for each user/webcam.

I did some statistics and noticed that the iris always covers about 48% of the surface of the eye, for all people (when they are looking at the center). Threshold values for binarizing images can differ a lot from person to person, but iris sizes are very stable.

So in the current version, here's how the automatic calibration works:

  • During the first 20 frames, I send them to the Calibration class
  • I try to binarize the frame with different threshold values from 5 to 100 (with a step of 5) and I calculate iris sizes
  • For each frame, I save the value that gives the iris size closest to 48%
  • When the first 20 frames have been analyzed, the final threshold value is the average of the 20 best values

It works well with the people I know, but I would like more feedback from different people.
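The calibration described above can be sketched in a few lines of numpy (a simplified stand-in for the library's Calibration class; the function names and the flat 0/255 binarization are assumptions, not the actual implementation):

```python
import numpy as np

IDEAL_IRIS_SIZE = 0.48   # empirical ratio reported above
NB_FRAMES = 20

def iris_size(binary):
    """Fraction of dark pixels in a binarized eye crop."""
    return (binary == 0).mean()

def best_threshold(eye_gray):
    """Try thresholds 5..95 (step 5) and keep the one whose
    binarization gives an iris size closest to 48%."""
    trials = {t: iris_size(np.where(eye_gray < t, 0, 255))
              for t in range(5, 100, 5)}
    return min(trials, key=lambda t: abs(trials[t] - IDEAL_IRIS_SIZE))

def calibrated_threshold(frames):
    """Average the per-frame best thresholds over the first 20 frames."""
    frames = frames[:NB_FRAMES]
    return sum(best_threshold(f) for f in frames) / len(frames)
```

The averaging makes the final value robust to a few frames where blinking or lighting throws off the iris-size estimate.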

Samagra12 commented 5 years ago

Hi, I want to start off by saying that this is the best webcam-based eye tracking I have found: light, fast, and super responsive. The only problem I am having is that when I run example.py, the pupils are detected but the gaze tracker keeps flashing "looking right". Thanks in advance.

antoinelame commented 5 years ago

Hi @Samagra12 Thanks for your feedback!

In previous versions of the library (Python 2 only), even when the pupil detection was good, the program always reported the gaze direction as going to the right. But I made a commit a few hours ago to fix it (a062b73aebc24d45234dda075c3742ea77d62d92). Check your version; I think you cloned the project before this commit.

If you still have the problem, tell me and we will try to solve it.

Samagra12 commented 5 years ago

Hi, thanks for replying. I downloaded the newer version and now it works fine: it flashes "looking centre". However, it is still not as accurate on my system as in your demo. How can I make it more accurate? Thanks in advance.


antoinelame commented 5 years ago

@Samagra12

If you want better accuracy on the gaze direction (left, center, right): in the gaze_tracking.py file, go to the is_right() and is_left() functions, where you can change the threshold values (0.35 and 0.65).
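As a sketch of what changing those thresholds does (standalone functions rather than the library's methods, and assuming, per the defaults above, that a lower horizontal ratio means the gaze is further to the right):

```python
RIGHT_THRESHOLD = 0.35  # library defaults mentioned above; tune per setup
LEFT_THRESHOLD = 0.65

def is_right(horizontal_ratio):
    return horizontal_ratio <= RIGHT_THRESHOLD

def is_left(horizontal_ratio):
    return horizontal_ratio >= LEFT_THRESHOLD

def is_center(horizontal_ratio):
    return not is_right(horizontal_ratio) and not is_left(horizontal_ratio)
```

Pulling the two thresholds apart widens the "center" band (fewer false left/right flashes); pushing them together makes the tracker more sensitive to small eye movements.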

If you want better accuracy on the pupil detection: in the pupil.py file, you can try passing your own threshold value to cv2.threshold(). But I'm not sure it's possible to find a better value than the one from the auto-calibration algorithm.

In any case, a good webcam and a bright environment help a lot. Also, you can send me video samples by email; I would use them to improve the algorithm.

ZZH0908 commented 5 years ago

I thought you used the algorithm from "Accurate Eye Centre Localisation by Means of Gradients". Have you compared these two algorithms?

bensalahmara commented 3 years ago

Hi, what is the face detection algorithm used?

hisham678 commented 3 years ago

Hi, did you write a paper?