geaxgx / depthai_handface

Running Google Mediapipe Face Mesh and Hand Tracking models on Luxonis DepthAI devices
MIT License

How to get iris landmarks #3

MolianWH opened this issue 2 years ago

MolianWH commented 2 years ago

Thanks for your great work. The result you show in the README has irises, but the faces list in demo.py only has 468 landmarks per face. How can I get the 478 points that include the irises?

geaxgx commented 2 years ago

The irises are given by the model with attention, so you need to run: ./demo.py --with_attention or shorter: ./demo.py -a
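For reference, here is a minimal sketch of how the iris points could be read in a script once attention is enabled. The names used below (the HandFaceTracker constructor argument with_attention, the return values of next_frame, and face.landmarks) are assumptions based on demo.py and may differ in your version of the repo:

```python
# Hypothetical sketch (names assumed from demo.py, not verified):
# run the face landmark model with attention to get the 478-point mesh,
# where indices 468..477 are the iris landmarks (5 per eye).
from HandFaceTracker import HandFaceTracker

tracker = HandFaceTracker(with_attention=True)   # same as ./demo.py -a

while True:
    frame, faces, hands = tracker.next_frame()
    if frame is None:
        break
    for face in faces:
        # face.landmarks is assumed to hold the mesh points; with attention
        # it should contain 478 entries instead of 468.
        iris_points = face.landmarks[468:478]
        print(f"{len(face.landmarks)} landmarks, irises: {iris_points}")

tracker.exit()
```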

MolianWH commented 2 years ago

The model with attention runs at only 9 fps in my environment. Do you have any idea how to improve that? I found that the blob input is 1920*1080 or 4K. Could you provide a blob with a lower resolution input, or explain how to convert a facemesh blob with a smaller input resolution?

geaxgx commented 2 years ago

I confirm that the model with attention is slow. If you don't need hand detection, you can expect a marginally faster frame rate (~11 fps) with the following command: ./demo.py -a -2

The face landmark model input is fixed at 192x192 (https://github.com/geaxgx/depthai_handface/blob/f51c5a622b01e5c312ec0ab0f6c6d30b8a113490/HandFaceTracker.py#L454) and does not depend on the camera resolution (1920x1080 or 4K).
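To illustrate that point, here is a small standalone sketch (not the repository's actual pipeline, which runs this step on-device): whatever the camera resolution, the face region is cropped and resized to 192x192 before being fed to the landmark model, so lowering the camera resolution would not change the landmark inference cost.

```python
import cv2
import numpy as np

FACE_LM_INPUT_SIZE = 192  # fixed input size of the face landmark model

def prepare_face_input(frame: np.ndarray, bbox: tuple) -> np.ndarray:
    """Crop the detected face region and resize it to the model's fixed
    192x192 input, independent of the original camera resolution."""
    x, y, w, h = bbox
    crop = frame[y:y+h, x:x+w]
    return cv2.resize(crop, (FACE_LM_INPUT_SIZE, FACE_LM_INPUT_SIZE),
                      interpolation=cv2.INTER_LINEAR)

# Whether the frame is 1920x1080 or 4K, the landmark model always sees 192x192.
frame_1080p = np.zeros((1080, 1920, 3), dtype=np.uint8)
print(prepare_face_input(frame_1080p, (800, 400, 300, 300)).shape)  # (192, 192, 3)
```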