Closed CarinaFo closed 5 years ago
Hi Carina,
In its current form the method doesn't track points over time, so the easiest option is to run it frame by frame. This is, for example, how the small gif in the readme was constructed. If speed is an issue you can batch the frames to speed things up further. You can use ffmpeg to dump the frames from the video and then call get_landmarks_from_directory on them. Let me know if this works; if not, I can simply add a function to handle this case.
Thanks, Adrian
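A minimal sketch of this workflow, assuming ffmpeg is available on the PATH and using the face-alignment package's FaceAlignment class as described in its readme (the LandmarksType._2D enum name matches older library versions and may differ in newer releases; the ffmpeg_dump_cmd helper is just for illustration):

```python
import subprocess
from pathlib import Path

def ffmpeg_dump_cmd(video_path, frames_dir, pattern="frame_%06d.png"):
    """Build the ffmpeg command that writes one image file per video frame."""
    return ["ffmpeg", "-i", str(video_path), str(Path(frames_dir) / pattern)]

if __name__ == "__main__":
    frames_dir = Path("frames")
    frames_dir.mkdir(exist_ok=True)

    # Step 1: dump every frame of the video to individual PNGs.
    subprocess.run(ffmpeg_dump_cmd("input.mp4", frames_dir), check=True)

    # Step 2: run the detector once over the whole directory of frames.
    import face_alignment
    fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D)
    preds = fa.get_landmarks_from_directory(str(frames_dir))
    # preds maps each image path to the landmarks detected in that frame.
```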
Can you explain it more clearly, Adrian? @1adrianb
@huybg-1975 I've compiled the procedure described by @1adrianb in a short script: https://gist.github.com/seva100/71807a726d2d153d5b5a30773999ebd6
It supports either the face-alignment or the MTCNN detector (--mode parameter). MTCNN is faster due to its small GPU memory consumption (and hence more threads, --n_jobs, can be used), but I suppose face-alignment is more accurate. The only thing it lacks is temporal smoothing, which can be added separately.
Hello Adrian,
I analyzed the images for our experiment using your algorithm, and it worked very well. Now I am thinking about getting landmarks over time, as I also took short videos while our subjects produced facial expressions. Is there a way your algorithm can extract landmarks over time from videos? I could always extract still images and analyze them, but I wondered if there is a more efficient solution? Thank you so much already.
Carina BCCN Berlin