hysts/pytorch_mpiigaze_demo

Gaze estimation using MPIIGaze and MPIIFaceGaze

A demo program of gaze estimation models (MPIIGaze, MPIIFaceGaze, ETH-XGaze)

With this program, you can run gaze estimation on images and videos. By default, the video from a webcam will be used.
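
For example, once the package is installed (see Quick start below), you can point the demo at a single image or a video file instead of the webcam. The commands below are a sketch using the options documented under Usage; the file paths are placeholders.

ptgaze --mode eth-xgaze --image /path/to/image.jpg
ptgaze --mode eth-xgaze --video /path/to/video.mp4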

[Example results: ETH-XGaze on videos 01-03, MPIIGaze and MPIIFaceGaze on video 00, and MPIIGaze on image 00.]

To train a model for MPIIGaze and MPIIFaceGaze, use this repository. You can also use this repo to train a model with the ETH-XGaze dataset.

Quick start

This program is tested only on Ubuntu.

Installation

pip install ptgaze
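
If you want to keep the dependencies isolated, installing into a virtual environment also works (an assumed setup, not a requirement of the package):

python -m venv .venv
source .venv/bin/activate
pip install ptgaze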

Run demo

ptgaze --mode eth-xgaze
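
The other models and devices listed under Usage can be selected in the same way, for example (a sketch; whether a GPU is actually used depends on your PyTorch installation):

ptgaze --mode mpiifacegaze
ptgaze --mode eth-xgaze --device cpu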

Usage

usage: ptgaze [-h] [--config CONFIG] [--mode {mpiigaze,mpiifacegaze,eth-xgaze}]
              [--face-detector {dlib,face_alignment_dlib,face_alignment_sfd,mediapipe}]
              [--device {cpu,cuda}] [--image IMAGE] [--video VIDEO] [--camera CAMERA]
              [--output-dir OUTPUT_DIR] [--ext {avi,mp4}] [--no-screen] [--debug]

optional arguments:
  -h, --help            show this help message and exit
  --config CONFIG       Config file. When using a config file, all the other commandline arguments
                        are ignored. See
                        https://github.com/hysts/pytorch_mpiigaze_demo/ptgaze/data/configs/eth-xgaze.yaml
  --mode {mpiigaze,mpiifacegaze,eth-xgaze}
                        With 'mpiigaze', MPIIGaze model will be used. With 'mpiifacegaze',
                        MPIIFaceGaze model will be used. With 'eth-xgaze', ETH-XGaze model will be
                        used.
  --face-detector {dlib,face_alignment_dlib,face_alignment_sfd,mediapipe}
                        The method used to detect faces and find face landmarks (default:
                        'mediapipe')
  --device {cpu,cuda}   Device used for model inference.
  --image IMAGE         Path to an input image file.
  --video VIDEO         Path to an input video file.
  --camera CAMERA       Camera calibration file. See
                        https://github.com/hysts/pytorch_mpiigaze_demo/ptgaze/data/calib/sample_params.yaml
  --output-dir OUTPUT_DIR, -o OUTPUT_DIR
                        If specified, the overlaid video will be saved to this directory.
  --ext {avi,mp4}, -e {avi,mp4}
                        Output video file extension.
  --no-screen           If specified, the video is not displayed on screen, and saved to the output
                        directory.
  --debug
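
As an example of combining these options, the following sketch processes a video with a custom camera calibration file and saves the overlaid result without opening a window; the paths are placeholders:

ptgaze --mode eth-xgaze --video /path/to/input.mp4 --camera /path/to/camera_params.yaml -o outputs --ext mp4 --no-screen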

While processing an image or video, press the following keys on the window to show or hide intermediate results:

References