flutter-ml / google_ml_kit_flutter

A Flutter plugin that implements Google's standalone ML Kit

Flutter. Face Detection. No face detected on Samsung Galaxy A23 Android 12: Access denied finding property "ro.mediatek.platform" #287

Closed: svonidze closed this issue 1 year ago

svonidze commented 2 years ago

The application is written with Flutter. Face detection works fine on many other devices, but it does not work on a Samsung Galaxy A23 running Android 12.

I found this line in the Android Studio log:

E/libc    (15411): Access denied finding property "ro.mediatek.platform"

Is Google ML Kit not compatible with the processor?

Full log

I/Camera  (15411): startPreview
I/CameraManagerGlobal(15411): Camera 1 facing CAMERA_FACING_FRONT state now CAMERA_STATE_OPEN for client com.skaiscan.app API Level 2
I/Camera  (15411): CameraCaptureSession onConfigured
I/Camera  (15411): Updating builder settings
D/Camera  (15411): Updating builder with feature: ExposureLockFeature
D/Camera  (15411): Updating builder with feature: ExposurePointFeature
D/Camera  (15411): Updating builder with feature: ZoomLevelFeature
D/Camera  (15411): Updating builder with feature: AutoFocusFeature
D/Camera  (15411): Updating builder with feature: NoiseReductionFeature
I/Camera  (15411): updateNoiseReduction | currentSetting: fast
D/Camera  (15411): Updating builder with feature: FocusPointFeature
D/Camera  (15411): Updating builder with feature: ResolutionFeature
D/Camera  (15411): Updating builder with feature: SensorOrientationFeature
D/Camera  (15411): Updating builder with feature: FlashFeature
D/Camera  (15411): Updating builder with feature: ExposureOffsetFeature
D/Camera  (15411): Updating builder with feature: FpsRangeFeature
I/Camera  (15411): refreshPreviewCaptureSession
I/CameraManagerGlobal(15411): Camera 1 facing CAMERA_FACING_FRONT state now CAMERA_STATE_ACTIVE for client com.skaiscan.app API Level 2
I/Camera  (15411): closeCaptureSession
I/BufferQueueProducer(15411): [SurfaceTexture-0-15411-0](id:3c3300000002,api:4,p:1269,c:15411) queueBuffer: queued for the first time.
I/CameraManagerGlobal(15411): Camera 1 facing CAMERA_FACING_FRONT state now CAMERA_STATE_IDLE for client com.skaiscan.app API Level 2
I/Camera  (15411): startPreviewWithImageStream
I/Camera  (15411): CameraCaptureSession onConfigured
I/Camera  (15411): Updating builder settings
D/Camera  (15411): Updating builder with feature: ExposureLockFeature
D/Camera  (15411): Updating builder with feature: ExposurePointFeature
D/Camera  (15411): Updating builder with feature: ZoomLevelFeature
D/Camera  (15411): Updating builder with feature: AutoFocusFeature
D/Camera  (15411): Updating builder with feature: NoiseReductionFeature
W/om.skaiscan.ap(15411): Long monitor contention with owner main (15411) at void android.hardware.camera2.impl.CameraDeviceImpl.waitUntilIdle()(CameraDeviceImpl.java:1339) waiters=0 in void android.hardware.camera2.impl.CameraDeviceImpl$4.run() for 447ms
I/Camera  (15411): updateNoiseReduction | currentSetting: fast
D/Camera  (15411): Updating builder with feature: FocusPointFeature
D/Camera  (15411): Updating builder with feature: ResolutionFeature
D/Camera  (15411): Updating builder with feature: SensorOrientationFeature
D/Camera  (15411): Updating builder with feature: FlashFeature
D/Camera  (15411): Updating builder with feature: ExposureOffsetFeature
D/Camera  (15411): Updating builder with feature: FpsRangeFeature
I/Camera  (15411): refreshPreviewCaptureSession
I/Camera  (15411): CameraCaptureSession onClosed
I/CameraManagerGlobal(15411): Camera 1 facing CAMERA_FACING_FRONT state now CAMERA_STATE_ACTIVE for client com.skaiscan.app API Level 2
I/BufferQueueProducer(15411): [SurfaceTexture-0-15411-0](id:3c3300000002,api:4,p:1269,c:15411) queueBuffer: queued for the first time.
I/BufferQueueProducer(15411): [ImageReader-1280x720f23m1-15411-1](id:3c3300000004,api:4,p:1269,c:15411) queueBuffer: queued for the first time.
I/flutter (15411): ┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
I/flutter (15411): │ 🐛 onEvent -- HomeBloc,
I/flutter (15411): │ 🐛 HomeCameraFaceChecked, message: cameraImage: ImageFormatGroup.yuv420
I/flutter (15411): └───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
I/DynamiteModule(15411): Considering local module com.google.mlkit.dynamite.face:10000 and remote module com.google.mlkit.dynamite.face:0
I/DynamiteModule(15411): Selected local version of com.google.mlkit.dynamite.face
D/TransportRuntime.SQLiteEventStore(15411): Storing event with priority=VERY_LOW, name=FIREBASE_ML_SDK for destination cct
D/TransportRuntime.JobInfoScheduler(15411): Upload for context TransportContext(cct, VERY_LOW, MSRodHRwczovL2ZpcmViYXNlbG9nZ2luZy5nb29nbGVhcGlzLmNvbS92MGNjL2xvZy9iYXRjaD9mb3JtYXQ9anNvbl9wcm90bzNc) is already scheduled. Returning...
I/om.skaiscan.ap(15411): Background concurrent copying GC freed 19230(1160KB) AllocSpace objects, 22(14MB) LOS objects, 45% free, 28MB/52MB, paused 406us,119us total 120.821ms
D/TransportRuntime.SQLiteEventStore(15411): Storing event with priority=DEFAULT, name=FIREBASE_ML_SDK for destination cct
D/TransportRuntime.JobInfoScheduler(15411): Scheduling upload for context TransportContext(cct, DEFAULT, MSRodHRwczovL2ZpcmViYXNlbG9nZ2luZy5nb29nbGVhcGlzLmNvbS92MGNjL2xvZy9iYXRjaD9mb3JtYXQ9anNvbl9wcm90bzNc) with jobId=-1387388017 in 629833ms(Backend next call timestamp 1656426053545). Attempt 1
I/DynamiteModule(15411): Considering local module com.google.mlkit.dynamite.face:10000 and remote module com.google.mlkit.dynamite.face:0
I/DynamiteModule(15411): Selected local version of com.google.mlkit.dynamite.face
V/FaceDetectorV2Jni(15411): initialize.start()
I/native  (15411): I0628 21:10:23.745867   15574 face_detector_v2_jni.cc:33] Loading models_bundled/fssd_medium_8bit_v5.tflite
D/TransportRuntime.SQLiteEventStore(15411): Storing event with priority=VERY_LOW, name=FIREBASE_ML_SDK for destination cct
I/native  (15411): I0628 21:10:23.751617   15574 face_detector_v2_jni.cc:33] Loading models_bundled/fssd_medium_8bit_gray_v5.tflite
D/TransportRuntime.JobInfoScheduler(15411): Upload for context TransportContext(cct, VERY_LOW, MSRodHRwczovL2ZpcmViYXNlbG9nZ2luZy5nb29nbGVhcGlzLmNvbS92MGNjL2xvZy9iYXRjaD9mb3JtYXQ9anNvbl9wcm90bzNc) is already scheduled. Returning...
I/native  (15411): I0628 21:10:23.756516   15574 face_detector_v2_jni.cc:33] Loading models_bundled/fssd_anchors_v5.pb
I/native  (15411): I0628 21:10:23.756945   15574 face_detector_v2_jni.cc:33] Loading models_bundled/LMprec_600.emd
I/native  (15411): I0628 21:10:23.777088   15574 face_detector_v2_jni.cc:33] Loading models_bundled/BCLlefteyeclosed_200.emd
I/native  (15411): I0628 21:10:23.778020   15574 face_detector_v2_jni.cc:33] Loading models_bundled/BCLrighteyeclosed_200.emd
I/native  (15411): I0628 21:10:23.778262   15574 face_detector_v2_jni.cc:33] Loading models_bundled/BCLjoy_200.emd
I/native  (15411): I0628 21:10:23.778749   15574 face_detector_v2_jni.cc:33] Loading models_bundled/MFT_fssd_accgray.pb
I/native  (15411): I0628 21:10:23.779243   15574 face_detector_v2_jni.cc:33] Loading models_bundled/contours.tfl
I/native  (15411): I0628 21:10:23.811448   15574 face_detector_v2_jni.cc:33] Loading models_bundled/blazeface.tfl
I/tflite  (15411): Initialized TensorFlow Lite runtime.
V/FaceDetectorV2Jni(15411): initialize.end()
V/FaceDetectorV2Jni(15411): initialize.start()
I/native  (15411): I0628 21:10:23.842092   15574 face_detector_v2_jni.cc:33] Loading models_bundled/fssd_medium_8bit_v5.tflite
I/native  (15411): I0628 21:10:23.842937   15574 face_detector_v2_jni.cc:33] Loading models_bundled/fssd_medium_8bit_gray_v5.tflite
I/native  (15411): I0628 21:10:23.845093   15574 face_detector_v2_jni.cc:33] Loading models_bundled/fssd_anchors_v5.pb
I/native  (15411): I0628 21:10:23.845889   15574 face_detector_v2_jni.cc:33] Loading models_bundled/LMprec_600.emd
I/native  (15411): I0628 21:10:23.863184   15574 face_detector_v2_jni.cc:33] Loading models_bundled/BCLlefteyeclosed_200.emd
I/native  (15411): I0628 21:10:23.864038   15574 face_detector_v2_jni.cc:33] Loading models_bundled/BCLrighteyeclosed_200.emd
I/native  (15411): I0628 21:10:23.864741   15574 face_detector_v2_jni.cc:33] Loading models_bundled/BCLjoy_200.emd
I/native  (15411): I0628 21:10:23.865313   15574 face_detector_v2_jni.cc:33] Loading models_bundled/MFT_fssd_fastgray.pb
I/native  (15411): I0628 21:10:23.865536   15574 face_detector_v2_jni.cc:33] Loading models_bundled/contours.tfl
I/native  (15411): I0628 21:10:23.878275   15574 face_detector_v2_jni.cc:33] Loading models_bundled/blazeface.tfl
V/FaceDetectorV2Jni(15411): initialize.end()
V/FaceDetectorV2Jni(15411): detectFacesImageByteArray.start()
E/libc    (15411): Access denied finding property "ro.mediatek.platform"
V/FaceDetectorV2Jni(15411): detectFacesImageByteArray.end()
V/FaceDetectorV2Jni(15411): detectFacesImageByteArray.start()
V/FaceDetectorV2Jni(15411): detectFacesImageByteArray.end()
D/TransportRuntime.SQLiteEventStore(15411): Storing event with priority=VERY_LOW, name=FIREBASE_ML_SDK for destination cct
I/flutter (15411): ┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
I/flutter (15411): │ #0   LoggerService.info (package:skaiscan_log_service/skaiscan_log_service.dart:46:12)
I/flutter (15411): │ #1   HomeBloc._onCameraFaceChecked (package:skaiscan/pages/home/bloc/home_bloc.dart:127:19)
I/flutter (15411): ├┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄
I/flutter (15411): │ 💡 Faces found: 0
I/flutter (15411): └───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
D/TransportRuntime.JobInfoScheduler(15411): Upload for context TransportContext(cct, VERY_LOW, MSRodHRwczovL2ZpcmViYXNlbG9nZ2luZy5nb29nbGVhcGlzLmNvbS92MGNjL2xvZy9iYXRjaD9mb3JtYXQ9anNvbl9wcm90bzNc) is already scheduled. Returning...
svonidze commented 2 years ago

A similar problem with the Samsung A23: https://github.com/bharat-biradar/Google-Ml-Kit-plugin/issues/285

fbernaly commented 2 years ago

This seems to be related to my comment on issue https://github.com/bharat-biradar/Google-Ml-Kit-plugin/issues/285

It is related to the output coming from the camera plugin.

LeeLoHoon commented 2 years ago

A similar problem with the Samsung S20.

dragongesa commented 1 year ago

@fbernaly So how do we fix this issue? Any suggestions?

Is it the resolution of the output coming from the camera that gives us the error?

acoutts commented 1 year ago

This sounds related to my issue here: https://github.com/flutter/flutter/issues/118350

svonidze commented 1 year ago

This helped: https://github.com/flutter/packages/pull/3277 !

acoutts commented 1 year ago

Glad it worked! The issue ended up being that nobody was implementing the YUV-to-NV21 conversion correctly. The 'concatenatePlanes' approach just doesn't cut it in all cases.

fbernaly commented 1 year ago

@acoutts , @svonidze : what was the fix for this?

ghost commented 1 year ago

Request streaming in NV21 format from the camera and pass that to the mlkit plugin.

fbernaly commented 1 year ago

Do we need to update anything in our plugin?

ghost commented 1 year ago

> Do we need to update anything in our plugin?

  1. Bump the example app to the latest version of the camera plugin that supports NV21 (0.10.5).

  2. Modify the camera controller constructor to stream NV21 on Android. This way, on both platforms there will be just a single-plane frame coming back from the native side. On iOS, bgra8888 is already single-plane and appropriate to send directly into ML Kit; on Android, NV21 plays the same role.

  Future<void> _startLiveFeed() async {
    final camera = cameras[_cameraIndex];
    _controller = CameraController(
      camera,
      ResolutionPreset.high,
      enableAudio: false,
      // NV21 on Android and BGRA8888 on iOS are both single-plane formats.
      imageFormatGroup: defaultTargetPlatform == TargetPlatform.android
          ? ImageFormatGroup.nv21
          : ImageFormatGroup.bgra8888,
    );
    // ...
  3. The function that processes frames can then be simplified to this:
  Future<void> _processCameraImage(CameraImage image) async {
    final bytes = image.planes.first.bytes;
    // ...

Maybe add a check that the plane count is 1, but for bgra8888 and NV21 it will always be a single plane.
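
For completeness, here is a minimal sketch of wrapping that single-plane frame into an InputImage for the detector. It assumes a version of google_mlkit_commons that exposes InputImage.fromBytes with InputImageMetadata (the API after the nv21/bgra8888 breaking change); the helper name and the hard-coded rotation are only for illustration.

    import 'dart:ui' show Size;

    import 'package:camera/camera.dart';
    import 'package:flutter/foundation.dart'
        show TargetPlatform, defaultTargetPlatform;
    import 'package:google_mlkit_commons/google_mlkit_commons.dart';

    /// Wraps a single-plane CameraImage (NV21 on Android, BGRA8888 on iOS)
    /// into an InputImage. The rotation is hard-coded for brevity; a real app
    /// should derive it from the sensor orientation and device orientation.
    InputImage? inputImageFromCameraImage(CameraImage image) {
      if (image.planes.length != 1) return null; // expect a single-plane frame

      final plane = image.planes.first;
      return InputImage.fromBytes(
        bytes: plane.bytes,
        metadata: InputImageMetadata(
          size: Size(image.width.toDouble(), image.height.toDouble()),
          rotation: InputImageRotation.rotation0deg, // placeholder
          format: defaultTargetPlatform == TargetPlatform.android
              ? InputImageFormat.nv21
              : InputImageFormat.bgra8888,
          bytesPerRow: plane.bytesPerRow,
        ),
      );
    }

The resulting InputImage can then be handed to FaceDetector.processImage.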

Of course, people are still free to stream in YUV420 (the default format) and do the conversion themselves, but it needs to be noted that this plane-concatenation routine is not a working solution: it breaks whenever the device camera includes padding in the image plane data, as on the devices mentioned in this issue.

What happens is that bytesPerRow no longer equals the frame width, so the resulting concatenated image is not a valid frame.

    // The plane-concatenation approach: it copies each plane verbatim,
    // keeping any row padding and never interleaving the chroma planes,
    // so the result is not valid NV21.
    final WriteBuffer allBytes = WriteBuffer();
    for (final Plane plane in image.planes) {
      allBytes.putUint8List(plane.bytes);
    }
    final bytes = allBytes.done().buffer.asUint8List();
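
By contrast, a conversion that actually works has to walk each plane using its bytesPerRow and bytesPerPixel strides, skipping the padding and interleaving the chroma samples the way NV21 expects. A rough sketch follows; the function name is made up here, and it assumes both chroma planes share the same strides, which is the common case.

    import 'dart:typed_data';

    import 'package:camera/camera.dart';

    /// Stride-aware YUV_420_888 -> NV21 conversion: copies the Y plane row by
    /// row (dropping per-row padding) and interleaves V/U samples for the
    /// chroma half-planes, honoring bytesPerRow and bytesPerPixel.
    Uint8List yuv420ToNv21(CameraImage image) {
      final int width = image.width;
      final int height = image.height;
      final Uint8List nv21 =
          Uint8List(width * height + 2 * (width ~/ 2) * (height ~/ 2));

      // Luma: one row at a time, skipping the padding at the end of each row.
      final yPlane = image.planes[0];
      int out = 0;
      for (int row = 0; row < height; row++) {
        final int rowStart = row * yPlane.bytesPerRow;
        nv21.setRange(out, out + width, yPlane.bytes, rowStart);
        out += width;
      }

      // Chroma: NV21 stores V then U, interleaved, at quarter resolution.
      final uPlane = image.planes[1];
      final vPlane = image.planes[2];
      final int rowStride = uPlane.bytesPerRow;
      final int pixelStride = uPlane.bytesPerPixel ?? 1;
      for (int row = 0; row < height ~/ 2; row++) {
        for (int col = 0; col < width ~/ 2; col++) {
          final int index = row * rowStride + col * pixelStride;
          nv21[out++] = vPlane.bytes[index];
          nv21[out++] = uPlane.bytes[index];
        }
      }
      return nv21;
    }

In practice, streaming NV21 directly from the camera plugin as described above avoids having to do this conversion at all.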

You can see my examples here, where I converted these frames into RGB to see what they actually look like to the ML Kit plugin. It is very clear that they are not valid images, which is why the image recognition doesn't work on them.

https://github.com/flutter/flutter/issues/118350

The official ML Kit example app by Google avoids this issue by using the CameraX camera and streaming directly in NV21, which the latest camera plugin can now do thanks to my PR that was merged.

fbernaly commented 1 year ago

Thanks @acoutts, I think this will solve most of the complaints and issues reported when using this plugin. I will work on these changes ASAP.

fbernaly commented 1 year ago

Based on @acoutts-nydig's comments, I am updating InputImage with some breaking changes so that it only supports nv21 and bgra8888 from the camera plugin, in this PR: https://github.com/flutter-ml/google_ml_kit_flutter/pull/454. It will be released later this week.

rraayy commented 1 year ago

Is there any way to crop the face when the controller is set to NV21? I ran into some problems when converting it to an image.

Dhruv-SalaryBox commented 7 months ago

How do I use InputImageFormat.nv21 with the following code?

final XFile? photo = await picker.pickImage(source: ImageSource.camera, preferredCameraDevice: CameraDevice.front);

What would be the right syntax? @fbernaly @ghost