Closed rishubjain closed 7 years ago
Hi,
There is no built-in way to do it. This was done on purpose, to simplify the flow of the application and because some post-processing steps are taken on the output files to make Action Unit recognition more accurate.
You should be able to change the code fairly easily to incorporate the webcam (as OpenCV uses the same interface for video and webcam reading).
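As a rough illustration of that point: since cv::VideoCapture exposes the same interface for webcams and video files, switching mostly means changing what the capture object is opened with. A minimal sketch; the "numeric argument means device index" convention below is an assumption, not something FeatureExtraction implements:

```cpp
#include <string>

// Decide whether a command-line argument names a webcam device index
// (e.g. "0") or a video file path. This convention is an assumption
// for illustration only.
bool looks_like_device_index(const std::string& arg) {
    return !arg.empty() &&
           arg.find_first_not_of("0123456789") == std::string::npos;
}

// In the capture setup this would become roughly:
//
//   cv::VideoCapture capture;
//   if (looks_like_device_index(arg))
//       capture.open(std::stoi(arg));   // webcam device id, e.g. "0"
//   else
//       capture.open(arg);              // video file path
//   // ...the frame-reading loop is identical either way.
```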
Thanks, Tadas
Hi,
Thanks! Also, is it necessary for the total number of frames to be more than 100 to do any post-processing on the AUs? If so, would the AUs be more accurate per frame if there were 10000 frames to run post-processing on instead of 100?
The reason I ask is because I intend to use FeatureExtraction in real-time, and am wondering if I should implement some kind of buffer so that I can run post-processing on the AUs.
Hi,
There is a mode for running AU prediction online rather than with post-processing: the model adapts to the person as it goes along. However, it is slightly less accurate. To do this, instead of:
face_analyser.AddNextFrame(captured_image, face_model, time_stamp, false, !det_parameters.quiet_mode);
Use:
face_analyser.AddNextFrame(captured_image, face_model, time_stamp, true, !det_parameters.quiet_mode);
However, if you expect different people to appear in front of the camera you will need to reset the face_analyser object using face_analyser.Reset(), as the adaptation will not transfer from one person to the next.
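A minimal sketch of that reset logic; the person-change heuristic here (reset once the tracker loses the face, since the next face it picks up may belong to someone else) is an assumption, not part of OpenFace:

```cpp
// Decide whether to reset the AU analyser: reset when tracking is lost,
// because the next detected face may belong to a different person.
// This heuristic is an assumption for illustration only.
bool should_reset(bool was_tracking, bool detection_success) {
    return was_tracking && !detection_success;
}

// Inside the capture loop, using the names from the thread:
//
//   if (should_reset(was_tracking, detection_success))
//       face_analyser.Reset();
//   was_tracking = detection_success;
//
//   // 'true' switches AddNextFrame into the online (adaptive) mode:
//   face_analyser.AddNextFrame(captured_image, face_model, time_stamp,
//                              true, !det_parameters.quiet_mode);
```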
Thanks, Tadas
Hi Tadas,
I have an issue running FeatureExtraction on Mac. While the other binaries such as FaceLandmarkVid run just fine (I can see the face being tracked), running the following with avi files seems to work at the beginning, except that it just stops after the following lines and no output is generated:
$ ./FeatureExtraction -root /Users/cleong/OpenFace/videos/ -f 1878_01_002_vladimir_putin.avi -outroot /Users/cleong/OpenFace/test -of test.feat -ov test.avi
Attempting to read from file: /Users/cleong/OpenFace/videos/1878_01_002_vladimir_putin.avi
Device or file opened
Starting tracking
This is not the case with an m4v video that I quickly recorded with my iPhone: using the same command as above with the m4v as input gives me the full tracked-feature output. I used ffmpeg to check the codecs.
m4v:
Stream #0:0(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 82 kb/s
Stream #0:1(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 720x1280 [SAR 1:1 DAR 9:16], 7621 kb/s, 30.03 fps, 30 tbr, 600 tbn, 50 tbc (default)
avi:
Stream #0:0: Video: mpeg4 (Simple Profile) (FMP4 / 0x34504D46), yuv420p, 320x240 [SAR 1:1 DAR 4:3], 565 kb/s, 25 fps, 25 tbr, 25 tbn, 25 tbc
Is this a known issue that videos with certain codecs cannot be tracked?
Ben
Hi Ben,
Do other binaries work on the m4v files or is it just FeatureExtraction that is having a problem?
The types of videos OpenFace can read depends on OpenCV (which it uses for reading videos) and the video library it is compiled with or is calling. I know that for Windows it uses ffmpeg, but am not as sure about other operating systems.
Tadas
Hi Tadas,
I tried to use FaceLandmarkVidMulti and FaceLandmarkVid with mpeg4, but they don't work, just as FeatureExtraction didn't. However, with h263 they work. This is not really a big issue for me, because I can always convert with ffmpeg before using OpenFace; it just adds an extra step. It could be the OpenCV reason you mentioned - I will look into that further.
Another issue is that I have problems getting the -outroot option to work for FeatureExtraction - the binary runs fine, but it always produces the output files in the same directory as the input files. Has this been reported before?
Ben
Hi,
Thanks for reporting the -outroot issue; when I refactored the code I did not move the -outroot functionality across. I will fix it soon.
Thanks, Tadas
On 19 July 2016 at 14:56, Chee Wee Leong notifications@github.com wrote:
Appreciate it. Thank you!
Hi Tadas, I wanted to check whether there is a fixed fps when running the FeatureExtraction binary, because the FPS that is displayed keeps changing. If I want to write the pose output to a video file, what fps should I set?
Hi,
The fps displayed refers to the processing speed. Since the FeatureExtraction binary processes every frame, the output will not have a varying fps; it will use the fps of the recording.
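In other words, when writing output, take the fps from the source recording rather than from the processing speed. A small sketch; the pick_fps helper and its 30.0 fallback (for sources that report no fps) are assumptions, not OpenFace code:

```cpp
// Choose the fps to pass to a video writer: use the value the source
// reports, falling back to an arbitrary default when the source reports
// none (some webcams return 0 or a negative value).
double pick_fps(double reported_fps, double fallback = 30.0) {
    return reported_fps > 0.0 ? reported_fps : fallback;
}

// With OpenCV this would be wired up roughly as:
//
//   double fps = pick_fps(capture.get(cv::CAP_PROP_FPS));
//   cv::VideoWriter writer("out.avi",
//                          cv::VideoWriter::fourcc('M','J','P','G'),
//                          fps, frame.size());
```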
Thanks, Tadas
Thanks Tadas.
Hi Tadas,
I have one more concern, regarding the gaze vector. I installed OpenFace on both Mac OS and Ubuntu 15.01, following the installation steps accordingly. On Mac, the FeatureExtraction output always gives the gaze vector as 0,0,-1 for both the left and right eye, and the tracking output does not show the gaze vector. On Ubuntu 15.01, the tracking output shows gaze and the gaze vector gets updated. How can I make it work on my Mac?
Hi,
That's quite strange. Are you using the exact same video? The gaze is not being computed if the face is too small or too far away from the camera as it would make the prediction unreliable.
-Tadas
Hi,
Yes. I am running on the same video. And video is recorded with webcam in front of the system.
That's really strange, I'll investigate once I have access to a mac (as that is not my dev. platform). If you manage to resolve the issue, please let me know what it was.
Hi Tadas,
Will surely post the answer when I figure it out.
Hi Tadas,
I would like to know if I can use feature extraction module to get features for multiple faces in a video.
Hi,
This question does not seem to be related to the current issue, please open a separate one (I'm trying to organize the issues and questions more). Also if it is a question, rather than a bug, make sure to mark the issue like that as well.
-Tadas
Is there a built-in way to run FeatureExtraction in real-time? According to the wiki, there seems to be no way to specify a webcam from which to read the images.