This is a short and simple project for eye tracking and writing with the eyes. It uses machine-learning-based facial mapping (landmarks) with dlib + Python + OpenCV, projecting the gaze of the eyes onto a virtual keyboard. The algorithm works in real time on the video stream from the webcam.
Feel free to use the code, play around with it, improve it, enjoy, learn, and come up with new ideas!
Video example: https://www.youtube.com/watch?v=_vMNbsJFbqM
Similar project (drawing with the eyes): https://www.youtube.com/watch?v=MQlzBQYI_hI
Logic flow: setup (webcam video stream, virtual keyboard), then, frame by frame: detect the face, locate the eye landmarks, and project the gaze onto the virtual keyboard to select keys (see the sketch below).
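A minimal sketch of that frame-by-frame loop, using only dlib and OpenCV; this is illustrative, not the actual code of the project scripts:

```python
# Minimal sketch of the per-frame loop: detect the face with dlib,
# locate the 68 facial landmarks, and display the annotated frame.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

camera = cv2.VideoCapture(0)              # webcam video stream
while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):           # one rectangle per detected face
        landmarks = predictor(gray, face)
        for i in range(68):               # draw all 68 facial landmarks
            p = landmarks.part(i)
            cv2.circle(frame, (p.x, p.y), 2, (0, 255, 0), -1)
    cv2.imshow("eye tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"): # press q to quit
        break
camera.release()
cv2.destroyAllWindows()
```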
Notes: the Python script is deliberately kept simple for didactic purposes (high-school / BSc level), with a basic structure (only a main script and functions). All the necessary functions are in the single file eye_key_funcs.py; the virtual keyboard is defined in projected_keyboard.py.
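For a rough idea of how a virtual keyboard can be drawn with OpenCV, here is an illustrative sketch; the actual layout and function names in projected_keyboard.py differ:

```python
# Illustrative virtual-keyboard overlay: a grid of key cells drawn with OpenCV.
# (Hypothetical layout and names, not taken from projected_keyboard.py.)
import cv2
import numpy as np

KEYS = ["QWERTYUIOP", "ASDFGHJKL", "ZXCVBNM"]

def draw_keyboard(width=800, height=300, key_size=80):
    """Return an image with the keys drawn as a grid of rectangles."""
    board = np.zeros((height, width, 3), dtype=np.uint8)
    for row, letters in enumerate(KEYS):
        for col, letter in enumerate(letters):
            x, y = col * key_size, row * key_size
            cv2.rectangle(board, (x, y), (x + key_size, y + key_size),
                          (255, 255, 255), 2)
            cv2.putText(board, letter, (x + 25, y + 55),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
    return board

# Quick check: cv2.imshow("keyboard", draw_keyboard()); cv2.waitKey(0)
```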
Requirements: Python, OpenCV (cv2), dlib, a webcam, and the trained landmark model shape_predictor_68_face_landmarks.dat (download link below).
More useful information about facial mapping (landmarks) with dlib + Python can be found here: https://towardsdatascience.com/facial-mapping-landmarks-with-dlib-python-160abcf7d672 and here: https://www.pyimagesearch.com/2017/04/03/facial-landmarks-dlib-opencv-python/
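In the 68-point scheme those tutorials cover, the eye contours correspond to landmark indices 36-41 and 42-47. A small illustrative helper (not taken from eye_key_funcs.py) to extract them as NumPy arrays:

```python
# Pull the eye contours out of a dlib full_object_detection result.
# In the 68-point model, points 36-41 outline one eye and 42-47 the other.
import numpy as np

def eye_points(landmarks):
    """Return two (6, 2) arrays of (x, y) eye landmark coordinates."""
    coords = np.array([(landmarks.part(i).x, landmarks.part(i).y)
                       for i in range(68)])
    return coords[36:42], coords[42:48]
```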
The trained model file shape_predictor_68_face_landmarks.dat can be downloaded here: https://github.com/davisking/dlib-models/blob/master/shape_predictor_68_face_landmarks.dat.bz2, with all the details here: https://github.com/davisking/dlib-models
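The model is distributed as a .bz2 archive; a minimal way to decompress it once and load it with dlib (assuming the archive sits next to the script):

```python
# Decompress the downloaded .bz2 archive once, then load the model with dlib.
import bz2
import dlib

with bz2.open("shape_predictor_68_face_landmarks.dat.bz2", "rb") as src, \
        open("shape_predictor_68_face_landmarks.dat", "wb") as dst:
    dst.write(src.read())

predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
```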