Do you mind clarifying which API you are using? Face detection?
Hey @bcdj, thank you for your reply. Yeah, I meant Face Detection.
I've used the react-native-camera library, which uses Google ML Kit internally, and now I am choosing a mobile web (webview) option with the same face detection features: leftEyeOpenProbability, rightEyeOpenProbability, rollAngle, yawAngle, and smilingProbability. How can I get those parameters?

You could take a closer look at the MediaPipe Face Detection and Face Mesh documentation: https://google.github.io/mediapipe/getting_started/javascript. I just gave it a glance and unfortunately it seems it doesn't provide those probabilities directly. You should be able to calculate them yourself from the output of the Face Mesh API, though it will take some effort.
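For example, a rough sketch of that idea: feed camera frames into `@mediapipe/face_mesh`, then derive an "eye open" score from an eye aspect ratio and a roll angle from the line between the eye corners. The landmark indices, the 0.1–0.4 calibration range, and the eye naming below are assumptions on my part (verify them against the canonical Face Mesh topology); these scores are heuristics, not ML Kit's actual classifier outputs.

```ts
import { FaceMesh, NormalizedLandmark, Results } from "@mediapipe/face_mesh";

// Commonly cited Face Mesh eye landmark indices (assumed, please verify).
const RIGHT_EYE = { top: 159, bottom: 145, inner: 133, outer: 33 };
const LEFT_EYE = { top: 386, bottom: 374, inner: 362, outer: 263 };

function dist(a: NormalizedLandmark, b: NormalizedLandmark): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Eye "openness" as a vertical/horizontal aspect ratio squashed into [0, 1].
// The 0.1–0.4 range is a guessed calibration, not an ML Kit-equivalent probability.
function eyeOpenScore(lm: NormalizedLandmark[], eye: typeof LEFT_EYE): number {
  const ratio = dist(lm[eye.top], lm[eye.bottom]) / dist(lm[eye.inner], lm[eye.outer]);
  return Math.min(1, Math.max(0, (ratio - 0.1) / 0.3));
}

// Head roll (degrees) from the line between the two outer eye corners.
function rollAngle(lm: NormalizedLandmark[]): number {
  const r = lm[RIGHT_EYE.outer];
  const l = lm[LEFT_EYE.outer];
  return (Math.atan2(l.y - r.y, l.x - r.x) * 180) / Math.PI;
}

const faceMesh = new FaceMesh({
  locateFile: (f) => `https://cdn.jsdelivr.net/npm/@mediapipe/face_mesh/${f}`,
});
faceMesh.setOptions({ maxNumFaces: 1, refineLandmarks: true });
faceMesh.onResults((results: Results) => {
  const lm = results.multiFaceLandmarks?.[0];
  if (!lm) return;
  console.log({
    leftEyeOpenScore: eyeOpenScore(lm, LEFT_EYE),
    rightEyeOpenScore: eyeOpenScore(lm, RIGHT_EYE),
    rollAngle: rollAngle(lm),
  });
});

// Frames can be pushed from the live camera, e.g. with @mediapipe/camera_utils:
// const camera = new Camera(videoEl, { onFrame: () => faceMesh.send({ image: videoEl }) });
// camera.start();
```

Yaw and smiling would need similar hand-rolled heuristics (e.g. comparing nose-to-cheek distances for yaw, or mouth geometry for smiling), which is the "some effort" part.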
@bcdj, OK, then it seems MediaPipe is not a web equivalent of Google ML Kit. I'm not sure why Google ML Kit does not support the web. Do you have any suggestions? Thanks.
ML Kit only provides Android and iOS APIs. Unfortunately, you have to pick another option for the web, and MediaPipe is one of them.
Hi, community! I am building a challenge-response-based liveness checking system for selfies using a mobile camera. The source is the device's live camera stream. I have already finished the mobile app version using the react-native-camera library, which internally uses Google ML Kit.

Now I am going to build a mobile web app using a webview. According to "Can I use Google ML Kit for translation for my web app?", ML Kit does not support web apps today; it is designed for use in native Android and iOS apps. So I am considering the following two options.
The first one is the Google Cloud Vision API. Its response mostly overlaps with Google ML Kit's, but it means many API requests in real time over the Internet, so I wonder whether it is a good option for a live camera stream.
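To make the trade-off concrete, this is roughly what one Cloud Vision FACE_DETECTION request per sampled frame looks like from the browser. The `apiKey` handling is only for illustration (in practice the call would go through your own backend so the key isn't exposed in the webview), and note that Cloud Vision returns roll/pan/tilt angles and likelihood enums such as joyLikelihood, but not eye-open probabilities directly, only eye landmark positions.

```ts
// Sketch: sample one frame from the <video> element and send it to Cloud Vision.
async function analyzeFrame(video: HTMLVideoElement, apiKey: string) {
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext("2d")!.drawImage(video, 0, 0);
  // Strip the "data:image/jpeg;base64," prefix to get the raw base64 content.
  const content = canvas.toDataURL("image/jpeg", 0.7).split(",")[1];

  const res = await fetch(
    `https://vision.googleapis.com/v1/images:annotate?key=${apiKey}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        requests: [
          {
            image: { content },
            features: [{ type: "FACE_DETECTION", maxResults: 1 }],
          },
        ],
      }),
    }
  );
  const json = await res.json();
  const face = json.responses?.[0]?.faceAnnotations?.[0];
  if (!face) return null;
  return {
    rollAngle: face.rollAngle,
    panAngle: face.panAngle, // roughly ML Kit's yawAngle
    tiltAngle: face.tiltAngle,
    smiling: face.joyLikelihood, // an enum like "VERY_LIKELY", not a number
  };
}
```

Each sampled frame is a full HTTPS round trip plus per-request billing, which is why a streaming liveness check over the network feels questionable to me.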
The second one is Google On-Device ML (https://developers.google.com/learn/topics/on-device-ml). But its response is low-level (it only returns eight vertices) and requires customization (e.g. deriving yaw, pitch, and roll angles) before it works for this use case.

I've been stuck at this point. Do you have any solutions for me? Thank you in advance.