invicted-ceo opened this issue 6 years ago
Can you try to search for the facefilter-desktop file in the _builds directory?
Having the same issue. I found the facefilter-desktop file in _builds, but I'm having trouble running it now. Which model file should I use? When I run it with a few different model files I get the same error message:
Failed to read a video frame with requested dimensions, received 1920x1080 expected 0x0
I'm passing a .mov video file. The videos are limited to just the eyes, so I don't necessarily need the full face fitting, just the eye model fitting. Any advice would be appreciated.
Is there documentation on just getting the desktop applications running?
There is a readme w/ instructions for this console app: https://github.com/elucideye/drishti/tree/master/src/app/hci
The desktop facefilter app should be fairly similar. I noticed that one isn't currently installed by cmake, and it should be. I'll fix that. As ruslo mentioned, it should be in the build tree. I can adapt that one for the facefilter app.
You can download the models manually as shown in that readme. They are also installed internally as part of the build process to support the tests.
I'm passing a .mov video file. The videos are limited to just the eyes, so I don't necessarily need the full face fitting, just the eye model fitting. Any advice would be appreciated
The facefilter app is geared towards selfie video, and it assumes a full face is visible without any strong FOV related clipping. It runs: (1) face detection (using accelerated ACF models); (2) coarse landmarks at low resolution to localize the eyes; (3) eye models. From what you describe, I don't think it will work for you as is.
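To make that staged design concrete, here is a minimal structural sketch of the per-frame flow; the helper names are hypothetical stand-ins for illustration only, not the actual drishti internals:

```cpp
// Structural sketch only: the helpers below are hypothetical stand-ins for
// (1) ACF face detection, (2) coarse landmark fitting, (3) eye model fitting.
#include <opencv2/core.hpp>
#include <vector>

std::vector<cv::Rect> detectFaces(const cv::Mat&) { return {}; }                        // (1) ACF face detector (stub)
std::vector<cv::Point2f> fitLandmarks(const cv::Mat&, const cv::Rect&) { return {}; }   // (2) coarse landmarks at low resolution (stub)
void fitEyeModels(const cv::Mat&, const std::vector<cv::Point2f>&) {}                   // (3) per-eye model regression (stub)

void processFrame(const cv::Mat& frame)
{
    // The pipeline assumes a full, unclipped face is visible; with eyes-only
    // video the first two stages have nothing to lock onto, which is why the
    // app won't work as is.
    for (const auto& face : detectFaces(frame))
    {
        const auto landmarks = fitLandmarks(frame, face); // localizes the eye regions
        fitEyeModels(frame, landmarks);                   // refines the per-eye models
    }
}

int main()
{
    cv::Mat frame(480, 640, CV_8UC3, cv::Scalar::all(0)); // stand-in for a video frame
    processFrame(frame);
    return 0;
}
```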
If you have video that only contains eyes, you will need to adapt this to get it to work. FWIW, the eye models will work on eye crop images if you already have reasonable bounding boxes. You can use the installed drishti-eye console application on eye crop images with a 4:3 aspect ratio and padding similar to what is shown in the README.
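As an illustration of that crop geometry, here is a minimal OpenCV sketch (not taken from the drishti sources) that expands an eye bounding box to a 4:3 aspect ratio with some padding; the margin value is an assumption, so match it to whatever the README examples show:

```cpp
// Minimal sketch: grow an eye bounding box to a 4:3 (width:height) crop with
// a relative margin, clamped to the image bounds. The margin is a guess.
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <algorithm>

cv::Rect padTo4x3(cv::Rect eye, const cv::Size& bounds, float margin = 0.33f)
{
    // Grow the box by a relative margin around its center.
    const float w = eye.width * (1.0f + 2.0f * margin);
    float h = eye.height * (1.0f + 2.0f * margin);

    // Force a 4:3 width:height aspect ratio.
    h = std::max(h, w * 3.0f / 4.0f);
    const float wOut = h * 4.0f / 3.0f;

    const cv::Point2f center(eye.x + eye.width * 0.5f, eye.y + eye.height * 0.5f);
    cv::Rect roi(static_cast<int>(center.x - wOut * 0.5f),
                 static_cast<int>(center.y - h * 0.5f),
                 static_cast<int>(wOut),
                 static_cast<int>(h));

    // Clamp to the image so the crop is valid (may trim the ratio at borders).
    return roi & cv::Rect({0, 0}, bounds);
}

int main(int argc, char** argv)
{
    if (argc < 2) return 1;
    cv::Mat image = cv::imread(argv[1]);
    if (image.empty()) return 1;
    cv::Rect eyeBox(100, 120, 80, 40); // hypothetical box from your own detector
    cv::Mat crop = image(padTo4x3(eyeBox, image.size())).clone();
    cv::imwrite("eye_crop.png", crop);
    return 0;
}
```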
One example of calling the model via the SDK can be seen in the following test:
If you want to run on video and you don't have a full face visible, then you can probably run an object detector directly to find bounding boxes for the eyes, and then run the eye model regression on those using the SDK.
The ACF repo does have a few eye models trained on high res images from unsplash.com. That might work for you. See:
https://github.com/elucideye/acf/releases/download/v0.0.0/acf_unsplash_60x40_eye_any_color_d4.cpb
https://github.com/elucideye/acf/releases/download/v0.0.0/acf_unsplash_60x40_eye_any_gray_d4.cpb
That detector is used in this repository via the acf package.
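Putting that together, a hedged sketch of the eye-only flow could look like the following; detectEyes and regressEyeModel are hypothetical placeholders for the acf detector (loaded from one of the .cpb models linked above) and the SDK eye fitting, so take the real calls from the repository's tests:

```cpp
// Hedged sketch of the suggested eye-only pipeline. The detector and eye
// model calls are hypothetical placeholders; wire them to the acf package
// and to the drishti SDK in your own code.
#include <opencv2/core.hpp>
#include <opencv2/videoio.hpp>
#include <vector>

std::vector<cv::Rect> detectEyes(const cv::Mat&) { return {}; } // placeholder: ACF eye detector
void regressEyeModel(const cv::Mat&) {}                         // placeholder: SDK eye model fitting

int main(int argc, char** argv)
{
    if (argc < 2) return 1;
    cv::VideoCapture video(argv[1]); // e.g., the .mov file mentioned above
    cv::Mat frame;
    while (video.read(frame))
    {
        for (const auto& box : detectEyes(frame))
        {
            // Pad each detection to a 4:3 crop (see the earlier sketch), then
            // hand the crop to the eye model regression.
            cv::Mat crop = frame(box & cv::Rect({0, 0}, frame.size())).clone();
            regressEyeModel(crop);
        }
    }
    return 0;
}
```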
I'm having trouble finding the desktop application after compiling. I'm compiling using the commands in the readme, as follows, for Mac:
cmake -H. -B_builds -GXcode -DHUNTER_STATUS_DEBUG=ON -DDRISHTI_BUILD_EXAMPLES=ON
cmake --build _builds --config Release
When I look at _builds/ I find a few executables, but it doesn't seem like there is anything that's immediately usable. Is there documentation on just getting the desktop applications running? I'm looking to run the examples on some test videos.
Do I need to download the model files externally from the resource page?