-
Plus blink loss
Note that you'll need to preprocess your data to provide
the left eye image,
right eye image,
2D keypoints,
target gaze, and
target blink for each sample during trainin…
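As a rough sketch, one way to bundle those five pieces into a training sample (the eye-patch size, landmark count, and gaze encoding below are assumptions for illustration, not values from the original):

```python
import numpy as np

def make_sample(left_eye, right_eye, keypoints, gaze, blink):
    """Bundle one training sample: both eye patches, 2D keypoints,
    target gaze direction, and target blink state."""
    return {
        "left_eye": np.asarray(left_eye, dtype=np.float32),    # e.g. (36, 60, 1) grayscale patch
        "right_eye": np.asarray(right_eye, dtype=np.float32),  # same shape as the left patch
        "keypoints": np.asarray(keypoints, dtype=np.float32),  # (N, 2) 2D landmarks
        "gaze": np.asarray(gaze, dtype=np.float32),            # e.g. (pitch, yaw) in radians
        "blink": np.float32(blink),                            # 1.0 = closed, 0.0 = open
    }

# Toy sample with zero-filled images and landmarks (68 points assumed).
sample = make_sample(
    left_eye=np.zeros((36, 60, 1)),
    right_eye=np.zeros((36, 60, 1)),
    keypoints=np.zeros((68, 2)),
    gaze=[0.1, -0.2],
    blink=0.0,
)
```

A generator or `tf.data` pipeline yielding dicts like this can then feed the inputs and the two loss targets (gaze and blink) during training.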
-
Hello, and thanks a lot for your work!
I'm wondering how I can get the trained Keras models; in the source, the Keras model file is empty.
Thank you very much.
Best regards
-
Hi! I'm an undergraduate cognitive science student using tcow to track my experiment videos. I'm interested in extracting data for everything inside the contour of my target objects, not just their ce…
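For context, if the tracker can export a per-frame binary mask for the target object (an assumption here, not something confirmed by the original question), a generic NumPy sketch for pulling out every pixel inside the object region, rather than just its center, might look like:

```python
import numpy as np

def values_inside_mask(frame, mask):
    """Return every pixel value of `frame` covered by the boolean `mask`.

    `frame` is (H, W) or (H, W, C); `mask` is a boolean (H, W) array,
    e.g. a thresholded per-object occupancy map.
    """
    return frame[mask.astype(bool)]

def mask_stats(frame, mask):
    """Per-frame summary statistics for the masked region."""
    vals = values_inside_mask(frame, mask)
    return {
        "area_px": int(mask.sum()),  # number of pixels inside the contour
        "mean": float(vals.mean()),
        "min": float(vals.min()),
        "max": float(vals.max()),
    }

# Toy example: a 4x4 frame with a mask covering the top-left 2x2 block.
frame = np.arange(16, dtype=np.float32).reshape(4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
stats = mask_stats(frame, mask)  # area 4; values 0, 1, 4, 5
```

Running this per frame gives a time series of region-level measurements instead of a single centroid track.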
-
Hey,
We've been using the WebGazer.js library in our web app to track eye gaze. So far, this library hasn't caused any integration issues with our app.
However, we're currently in the …
-
Currently, WebGazer runs as fast as it can on whatever image is in the video element canvas.
Ideally, it would only run when a new frame is added, to reduce redundant work.
This can be tracked via…
-
- Installation: Docker on Ubuntu
- Execute:
```
/home/openface-build/build/bin/FeatureExtraction -f /home/openface-build/video/my_video.mp4 -out_dir /home/openface-build/output -2Dfp -3Dfp -pdmp…
```
-
We found that all three of these parameters are False, so the eye iris, eyeball center, and gaze could not be found. Can someone give some suggestions? Thanks a lot!
-
I wanted to compile a list of tutorials that would be useful to both researchers and students. I'm considering either a separate issue for each tutorial or one giant issue here with a task list. At l…
-
I found this work really fascinating, and I was wondering if you could provide the code for computing the gold standard? I tried computing it using pysaliency, but I must be doing something wrong because it…
-
I'm looking to create a system that analyses the Facial Action Units from a webcam feed and applies them to a 3D character model in a Unity application in real-time. To use OpenFace for this I either …