Closed: alesubbiah closed this issue 1 year ago.
Hi @alesubbiah, you can run the program on an image with the following command:
ptgaze --mode eth-xgaze --image image00.jpg
The --ext argument specifies the extension of the output video when the input is a video, so it is irrelevant for images; you can simply omit it.
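For reference, the two kinds of invocation look like this (file names are only placeholders, and the --video flag is assumed to be the CLI's video-input option):

```shell
# Run on a single image; --ext is not needed here
ptgaze --mode eth-xgaze --image image00.jpg

# Run on a video, writing the output with an .avi extension
ptgaze --mode eth-xgaze --video input.mp4 --ext avi
```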
Thanks very much for your help @hysts! Before, I kept getting a 'cannot connect to X server' error, but it seems to be working now! :) However, I'm not able to get a gaze line for each eye, like in some of the examples; I only get the single central gaze line. Is there an argument for drawing both gaze lines as well as the face box? Many thanks again for your help!
@alesubbiah You can get a gaze vector for each eye if you specify --mode mpiigaze instead of --mode eth-xgaze.
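For example, the per-eye variant of the earlier image command would be:

```shell
ptgaze --mode mpiigaze --image image00.jpg
```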
@hysts thanks so much! This mode isn't working for me: I get an output file, but it doesn't show any facial or eye landmarks.
@alesubbiah That's odd. Does it also fail on the sample image image00.jpg in this repo? Are you getting any errors in your terminal? Posting your logs may help.
@hysts thanks so much for your quick response, and apologies for the delay! It works with image00 and image01, although each eye gives a line in a different direction (see below). I don't get any errors; I simply get an output with no face box or eye gaze (this is the output for image02.jpg, for example), and the same happens with the other images I've tried:
I can simply use the eth-xgaze mode if this is hard to solve, but is there a way to extend the gaze line (make it much longer, with an arrow)? I've tried many of these libraries, and this is the only one that works consistently! :) Again, thank you so much!
@alesubbiah
It works with image00 and image01, although each eye gives a line in a different direction (see below).
The MPIIGaze model is trained on the MPIIGaze dataset, and because the dataset covers only a limited range of gaze directions, predictions can be inaccurate outside that range.
I don't get any errors in the code, I simply get an output with no facebox or eye gaze (this is the output for image02.jpg for example), and this is what happens with the other images I have tried it with:
Ah, I think that's because the face detection is failing. This demo app uses the MediaPipe face detector by default, which is trained mainly on selfie-like images, so detection may fail on other kinds of images. You can specify --face-detector face_alignment_sfd to use a different face detector, which is much slower but better at detecting faces in diverse images.
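Putting that together, a run with the alternative detector would look like this (the image name is only an example):

```shell
ptgaze --mode mpiigaze --image image02.jpg --face-detector face_alignment_sfd
```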
Oh, I forgot to answer this one:
is there a way to extend the gaze line (make it much much longer and with an arrow?)
The length of the gaze vector is specified here. https://github.com/hysts/pytorch_mpiigaze_demo/blob/12f7aef0bc0271f59925e63e53e23d03e355e5a1/ptgaze/demo.py#L221
Thanks so much @hysts! I will try now with the --face-detector changed. I did have a question about line 221 for changing the length. I'm not sure exactly what I can do here (forgive me!). Can I just set an arbitrary number of pixels instead? Is the length taken from original PyTorch functions? Apologies if this seems like a silly question, and thanks again for all your help! :)
@alesubbiah
Can I just arbitrarily set a number of pixels here instead?
You can set a length in meters, not a pixel value. Note that if you specify a length long enough that the tip of the vector goes past the camera, the visualization will be incorrect.
Is it taking this length from original PyTorch functions?
It's simply used for visualization purposes and has nothing to do with PyTorch.
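To illustrate what the length parameter does, here is a minimal sketch (not the actual demo code; the function name is hypothetical): the endpoint of the drawn gaze line is the face center moved along the predicted gaze direction by the given length in meters, and that 3D point is then projected to the image for drawing.

```python
def gaze_line_endpoint(center, gaze_vector, length=0.05):
    """Return the 3D endpoint of the gaze line.

    center: (x, y, z) face center in camera coordinates, in meters
    gaze_vector: unit 3D gaze direction
    length: gaze-line length in meters (the value hard-coded in demo.py)
    """
    return tuple(c + length * g for c, g in zip(center, gaze_vector))

# Example: a face 0.6 m from the camera looking straight at it (-z direction);
# the endpoint ends up roughly at (0.0, 0.0, 0.55).
end = gaze_line_endpoint((0.0, 0.0, 0.6), (0.0, 0.0, -1.0), length=0.05)
print(end)
```

Making the length larger simply moves this endpoint further from the face before projection, which is why the value is in meters rather than pixels.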
Hello! Thanks so much for this demo! I was wondering if you could add .jpg or .png to the types of files that can be used with it? I'm not able to process images (even those in the demo samples), because the command-line argument only accepts .mp4 or .avi. Many thanks!