cursan-a closed this issue 1 week ago
OK, I tested everything and anything with the version of "GoogleMLKit/FaceDetection", and I modified how the face detector is used via VisionCameraFaceDetector.podspec: no success!
I finally updated my phone to iOS 15.8.3 and, miracle, it works!
GoogleMLKit/FaceDetection requires iOS 12 as a minimum, but I guess that for some strange reason it doesn't work on iOS 14 either 🤷♂️
Describe the bug:
I tried to get the contours of a face using contourMode: "all" in FaceDetectionOptions, but I always get an empty object:
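For reference, here is a minimal sketch of the options object I mean. The field names come from the parameters discussed in this report; the library's actual FaceDetectionOptions shape may differ.

```typescript
// Hypothetical options object based on the parameter names in this report;
// the actual FaceDetectionOptions shape in the library may differ.
const faceDetectionOptions = {
  performanceMode: 'accurate', // 'fast' | 'accurate'
  landmarkMode: 'all',         // 'none' | 'all'
  contourMode: 'all',          // 'none' | 'all' -- should populate face.contours
  trackingEnabled: false,
};
```

With contourMode set to 'all', I expect face.contours to be populated, but it comes back empty.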
Minimum reproducible example:
It's just a fresh new expo bare workflow project with the following dependencies:
Expected behavior:
I was hoping to get contour points as described in the ML Kit documentation.
I have tried: playing with different parameters (landmarkMode: none/all, trackingEnabled: true/false, performanceMode: accurate/fast, etc.).

EDIT: Since then, I have tried to force the GoogleMLKit dependency in the podspec file (5.0 & 4.0), but I still have the same issue. I ran my app from Xcode and added some debug logs in VisionCameraFaceDetector.swift: face.contours is indeed empty. Everything looks good in FaceDetectorOptions, so it's really weird.
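The podspec change I tried looked roughly like this (a sketch only; the surrounding spec fields are omitted, and the exact layout of VisionCameraFaceDetector.podspec may differ):

```ruby
Pod::Spec.new do |s|
  # ...existing spec fields omitted...
  # Forcing a specific GoogleMLKit version (tried both ~> 4.0 and ~> 5.0; neither helped)
  s.dependency 'GoogleMLKit/FaceDetection', '~> 4.0'
end
```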
Device: