The Capacitor MLKit plugins are intended to make the MLKit SDKs accessible for Capacitor apps. The integration of Apple's Vision framework is therefore out of scope.
Plugin(s)
@capacitor-mlkit/face-detection
Current problem
I am currently using the @capacitor-mlkit/face-detection plugin to verify users' selfies. However, many users upload images of chairs, walls, or other irrelevant objects instead of their faces, and some rotate their heads so that only the left or right ear is visible. These images fail selfie verification. I therefore need a way to give users real-time feedback that prompts them to keep their face fully visible in the camera preview.
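As a stopgap until real-time feedback exists, the captured photo can at least be pre-screened before upload. A minimal sketch, assuming the `Face` objects returned by the plugin's `processImage()` expose a `headEulerAngleY` yaw angle (the field name is taken from the plugin's ML Kit-based result shape; the yaw threshold is an arbitrary example value):

```typescript
// Minimal subset of the Face shape returned by FaceDetection.processImage()
// (field name assumed from the plugin's ML Kit-based API).
interface Face {
  headEulerAngleY: number; // yaw: head rotation around the vertical axis, in degrees
}

// Reject a selfie when no face was found (chair, wall, ...) or when every
// detected face is turned so far sideways that mostly an ear is visible.
function isUsableSelfie(faces: Face[], maxYawDegrees = 36): boolean {
  if (faces.length === 0) return false;
  return faces.some((face) => Math.abs(face.headEulerAngleY) <= maxYawDegrees);
}

// In the app this would be fed from the plugin, e.g.:
// const { faces } = await FaceDetection.processImage({ path: photo.path });
// if (!isUsableSelfie(faces)) { /* ask the user to retake the photo */ }
```

This still only works after the photo is taken, which is why live feedback in the camera preview remains the preferred solution.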
Preferred solution
Please integrate real-time face tracking with Apple's Vision framework into the @capacitor-mlkit/face-detection library, as described in Apple's documentation on tracking the user's face in real time.
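To make the request concrete, one possible shape for such a plugin API is sketched below. All of these names are purely illustrative; none of them exist in @capacitor-mlkit/face-detection today:

```typescript
// Hypothetical API sketch for real-time face tracking feedback.
// Every identifier here is invented for illustration only.

interface FaceTrackingEvent {
  faceDetected: boolean; // at least one face in the current preview frame
  headEulerAngleY: number | null; // yaw in degrees; null when no face is detected
}

interface FaceTrackingPlugin {
  startFaceTracking(onFrame: (event: FaceTrackingEvent) => void): Promise<void>;
  stopFaceTracking(): Promise<void>;
}

// Example consumer: turn per-frame tracking events into user-facing prompts.
function promptFor(event: FaceTrackingEvent): string {
  if (!event.faceDetected) return 'Please point the camera at your face.';
  if (event.headEulerAngleY !== null && Math.abs(event.headEulerAngleY) > 30) {
    return 'Please look straight at the camera.';
  }
  return 'Hold still…';
}
```

With an event stream like this, the app could show the prompt as an overlay on the camera preview and only enable the shutter once the face is fully visible.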
Alternative options
No response
Additional context
Was this feature omitted because of the increased app size?
No response
Before submitting