antoinelame / GazeTracking

👀 Eye Tracking library easily implementable to your projects
MIT License

Left pupil and right pupil None #52

Open ctippur opened 3 years ago

ctippur commented 3 years ago

Hello,

Thanks for initiating this effort.

I have a stationary video taken on my phone in which I am not moving. I changed the code to read from a file; the rest of the code remains the same as example.py.

The video plays, but the left and right pupil coordinates are None. Could this be related to ambient light?

I do see that the video is clear. Please see a sample image.

[Screenshot: Screen Shot 2020-10-12 at 9 24 41 AM]

What am I doing wrong?

S

import cv2
from gaze_tracking import GazeTracking

gaze = GazeTracking()
cap = cv2.VideoCapture('/Users/shekartippur/playground/tflite/myvideo.mp4')

while True:
    ret, frame = cap.read()
    if not ret:  # stop when the video ends or the file cannot be read
        break

    # We send this frame to GazeTracking to analyze it
    gaze.refresh(frame)

    frame = gaze.annotated_frame()
    text = ""

    if gaze.is_blinking():
        text = "Blinking"
    elif gaze.is_right():
        text = "Looking right"
    elif gaze.is_left():
        text = "Looking left"
    elif gaze.is_center():
        text = "Looking center"

    cv2.putText(frame, text, (90, 60), cv2.FONT_HERSHEY_DUPLEX, 1.6, (147, 58, 31), 2)

    left_pupil = gaze.pupil_left_coords()
    right_pupil = gaze.pupil_right_coords()
    cv2.putText(frame, "Left pupil:  " + str(left_pupil), (90, 130), cv2.FONT_HERSHEY_DUPLEX, 0.9, (147, 58, 31), 1)
    cv2.putText(frame, "Right pupil: " + str(right_pupil), (90, 165), cv2.FONT_HERSHEY_DUPLEX, 0.9, (147, 58, 31), 1)

    cv2.imshow("Demo", frame)

    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
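To tell an occasional miss apart from a systematic detection failure, it can help to count how many frames actually yield pupil coordinates. A minimal, library-free sketch (the `detection_rate` helper and the sample values are hypothetical; in practice you would append `gaze.pupil_left_coords()` to a list inside the read loop):

```python
def detection_rate(pupil_coords):
    """Fraction of frames in which a pupil coordinate was found.

    pupil_coords: list of (x, y) tuples or None, one entry per frame,
    e.g. collected from gaze.pupil_left_coords() inside the read loop.
    """
    if not pupil_coords:
        return 0.0
    found = sum(1 for c in pupil_coords if c is not None)
    return found / len(pupil_coords)

# Hypothetical per-frame samples: None means the tracker found no pupil.
samples = [(312, 187), None, (310, 190), None, None, None]
print(detection_rate(samples))  # 2 of 6 frames -> ~0.33
```

A rate near zero across the whole clip points at face detection failing (framing, resolution, lighting), not at a transient tracking glitch.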
keshariS commented 2 years ago

The whole face must be visible in the video for the dlib library to detect the face and, in turn, the facial landmarks.