flutter-ml / google_ml_kit_flutter

A Flutter plugin that implements Google's standalone ML Kit
MIT License

Face detection not working on front camera (iOS) #570

Open giordy16 opened 10 months ago

giordy16 commented 10 months ago

Face detection not working on front camera (iOS)

I am using the example app, and when I try to take a picture with the front camera, the number of faces found is always 0. If I use the back camera, everything works.

Steps to reproduce the behavior:

  1. Go to 'Face detection'
  2. Click on the bottom left icon, then on 'Take a picture'
  3. Take a selfie with the front camera
  4. See error

Platform:

giordy16 commented 10 months ago

EDIT: it actually works sometimes. The front camera UI has a little button to zoom in/out. If I take the picture zoomed out, faces are detected; if the camera has that slight zoom in (which is the default setting), they are not.

fbernaly commented 10 months ago

For face recognition, you should use an image with dimensions of at least 480x360 pixels. For ML Kit to accurately detect faces, input images must contain faces that are represented by sufficient pixel data. In general, each face you want to detect in an image should be at least 100x100 pixels. source: https://developers.google.com/ml-kit/vision/face-detection/android#input-image-guidelines
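Those guidelines boil down to a simple size check. As a rough sketch (the helper name is made up for illustration; the thresholds are the documented minimums):

```dart
// Sketch of the ML Kit input-image guidelines quoted above.
// `meetsFaceDetectionGuidelines` is a hypothetical helper, not part of the plugin.
bool meetsFaceDetectionGuidelines({
  required int imageWidth,
  required int imageHeight,
  required int faceWidth,
  required int faceHeight,
}) {
  // The overall image should be at least 480x360 pixels (either orientation).
  final imageOk = (imageWidth >= 480 && imageHeight >= 360) ||
      (imageWidth >= 360 && imageHeight >= 480);
  // Each face you want detected should cover at least 100x100 pixels.
  final faceOk = faceWidth >= 100 && faceHeight >= 100;
  return imageOk && faceOk;
}

void main() {
  // A 2316x3088 selfie easily clears the resolution minimums, so the
  // failure reported above must come from something other than image size.
  print(meetsFaceDetectionGuidelines(
      imageWidth: 2316, imageHeight: 3088, faceWidth: 200, faceHeight: 200));
}
```

Both of the picture sizes reported in this thread pass this check, which suggests the failure is not a resolution problem.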

giordy16 commented 10 months ago

When the front camera is in zoom-in mode, the picture has a dimension of 2316×3088, and my face is NOT detected. When the camera is in zoom-out mode, the picture has a dimension of 3024×4032, and my face is detected.

henseljahja commented 9 months ago

Have you found a solution for this besides zooming out every time? Zooming out makes the process stall. @giordy16

giordy16 commented 9 months ago

> Have you found a solution for this besides zooming out every time?

No.

github-actions[bot] commented 7 months ago

This issue is stale because it has been open for 30 days with no activity.

TecHaxter commented 7 months ago

So I have narrowed down the issue to these points:

  1. Real-time face detection works on iOS when the InputImage uses the bgra8888 image format group and the InputImage.fromBytes factory constructor (as in the example app).
  2. Face detection works fine with an image selected from the iOS Photos app, using the InputImage.fromFilePath factory constructor.
  3. Face detection does not work with an image captured on iOS via the camera plugin's .takePicture function, using the InputImage.fromFilePath factory constructor.
  4. Face detection works when the image captured on iOS via the camera plugin's .takePicture function is first saved to the Photos app and the Photos app image path is used, with the InputImage.fromFilePath factory constructor.

Overall, this issue is related to the file path on iOS. The library may be unable to load the UIImage in the Swift code when the path points to an image captured with the camera plugin, yet it can load the UIImage when the path comes from the Photos app.
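Based on point 4 above, one hedged workaround is to copy the captured file's bytes to a fresh file in the app's documents directory and run detection on that path, mimicking the "save to Photos, then read back" behavior without touching the photo library. This is an untested sketch, not a confirmed fix; `detectFromResavedCapture` is a hypothetical helper, and it assumes the google_mlkit_face_detection and path_provider packages:

```dart
import 'dart:io';

import 'package:google_mlkit_face_detection/google_mlkit_face_detection.dart';
import 'package:path_provider/path_provider.dart';

// Hypothetical sketch of the observation above: re-save the captured
// file, then detect from the new path.
Future<List<Face>> detectFromResavedCapture(String capturedPath) async {
  final dir = await getApplicationDocumentsDirectory();
  final copyPath = '${dir.path}/${DateTime.now().millisecondsSinceEpoch}.jpg';

  // Copy the raw bytes to a path the plugin is known to load successfully.
  await File(capturedPath).copy(copyPath);

  final detector = FaceDetector(options: FaceDetectorOptions());
  try {
    return await detector.processImage(InputImage.fromFilePath(copyPath));
  } finally {
    detector.close();
  }
}
```

Note that a plain byte copy preserves EXIF metadata, so if the root cause turns out to be orientation rather than the path itself, re-encoding the image (as in the snippets below in this thread) would still be needed.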

TecHaxter commented 7 months ago

@fbernaly can you remove the stale label and look into this issue?

fbernaly commented 7 months ago

@TecHaxter: I have removed the stale label, but I do not have the bandwidth to work on this. Feel free to fork the repo and submit your contribution. I will review your PR and release a new version ASAP.

itzmail commented 6 months ago

try this

```dart
Future<bool> getImageAndDetectFaces(XFile imageFile) async {
  try {
    if (Platform.isIOS) {
      // Give iOS a moment to finish writing the captured file to disk.
      await Future.delayed(const Duration(milliseconds: 1000));
    }

    List<Face> faces = await processPickedFile(imageFile);

    if (faces.isEmpty) {
      return false;
    }

    double screenWidth = MediaQuery.of(context).size.width;
    double screenHeight = MediaQuery.of(context).size.height;
    final radius = screenWidth * 0.35;
    Rect rectOverlay = Rect.fromLTRB(
      screenWidth / 2 - radius,
      screenHeight / 3.5 - radius,
      screenWidth / 2 + radius,
      screenHeight / 2.5 + radius,
    );

    // Reject the picture if any detected face falls outside the overlay.
    for (Face face in faces) {
      final Rect boundingBox = face.boundingBox;
      if (boundingBox.bottom < rectOverlay.top ||
          boundingBox.top > rectOverlay.bottom ||
          boundingBox.right < rectOverlay.left ||
          boundingBox.left > rectOverlay.right) {
        return false;
      }
    }

    return true;
  } catch (e) {
    return false;
  }
}

Future<List<Face>> processPickedFile(XFile pickedFile) async {
  final path = pickedFile.path;

  InputImage inputImage;
  if (Platform.isIOS) {
    final File? iosImageProcessed = await bakeImageOrientation(pickedFile);
    if (iosImageProcessed == null) {
      return [];
    }
    inputImage = InputImage.fromFilePath(iosImageProcessed.path);
  } else {
    inputImage = InputImage.fromFilePath(path);
  }
  print('INPUT IMAGE PROCESSED: ${inputImage.filePath}');

  List<Face> faces = await faceDetector.processImage(inputImage);
  print('Found ${faces.length} faces for picked file');
  return faces;
}

Future<File?> bakeImageOrientation(XFile pickedFile) async {
  if (Platform.isIOS) {
    final directory = await getApplicationDocumentsDirectory();
    final path = directory.path;
    final filename = DateTime.now().millisecondsSinceEpoch.toString();

    final imglib.Image? capturedImage =
        imglib.decodeImage(await File(pickedFile.path).readAsBytes());

    if (capturedImage == null) {
      return null;
    }

    // Apply the EXIF orientation to the pixel data so ML Kit sees an
    // upright image, then re-encode without the orientation tag.
    final imglib.Image orientedImage = imglib.bakeOrientation(capturedImage);

    File imageToBeProcessed = await File('$path/$filename')
        .writeAsBytes(imglib.encodeJpg(orientedImage));

    return imageToBeProcessed;
  }
  return null;
}
```

Don't forget this import:

```dart
import 'package:image/image.dart' as imglib;
```

CoderJava commented 6 months ago


Thanks. It's working for me.

tsukifell commented 3 months ago


Thank you, this does work for me.

husnain067 commented 2 months ago

Thank you, this works on iOS @itzmail

```dart
InputImage inputImage;
if (Platform.isIOS) {
  final File? iosImageProcessed = await bakeImageOrientation(pickedFile);
  if (iosImageProcessed == null) {
    return [];
  }
  inputImage = InputImage.fromFilePath(iosImageProcessed.path);
} else {
  inputImage = InputImage.fromFilePath(path);
}
print('INPUT IMAGE PROCESSED: ${inputImage.filePath}');

List<Face> faces = await faceDetector.processImage(inputImage);
print('Found ${faces.length} faces for picked file');
return faces;
```

akwa-peter commented 1 month ago

Thanks @itzmail, this worked like magic; face detection is now smooth and painless.

```dart
Future<File?> bakeImageOrientation(XFile pickedFile) async {
  if (Platform.isIOS) {
    final directory = await getApplicationDocumentsDirectory();
    final path = directory.path;
    final filename = DateTime.now().millisecondsSinceEpoch.toString();

    final imglib.Image? capturedImage =
        imglib.decodeImage(await File(pickedFile.path).readAsBytes());

    if (capturedImage == null) {
      return null;
    }

    final imglib.Image orientedImage = imglib.bakeOrientation(capturedImage);

    File imageToBeProcessed = await File('$path/$filename')
        .writeAsBytes(imglib.encodeJpg(orientedImage));

    return imageToBeProcessed;
  }
  return null;
}
```
roman-khattak commented 2 weeks ago

@giordy16 This solution worked for me. I'll leave it here in case someone else has the same issue! Thanks a lot for reporting it.

```dart
import 'dart:io';
import 'package:google_ml_kit/google_ml_kit.dart';
import 'package:image/image.dart' as imglib;
import 'package:image_picker/image_picker.dart';
import 'package:path_provider/path_provider.dart';

Future<List<Face>?> getImageAndDetectFaces() async {
  try {
    final XFile? imageFile = await _imagePicker.pickImage(
      source: ImageSource.camera,
      preferredCameraDevice: CameraDevice.front,
    );
    if (imageFile == null) return null;

    if (Platform.isIOS) {
      // Give iOS a moment to finish writing the captured file to disk.
      await Future.delayed(const Duration(milliseconds: 1000));
    }

    List<Face> faces = await processPickedFile(imageFile);

    if (faces.isEmpty) {
      throw EvomException('No face was detected in the image!');
    }
    return faces;
  } catch (e) {
    throw Exception('$e');
  }
}

Future<List<Face>> processPickedFile(XFile pickedFile) async {
  final path = pickedFile.path;
  InputImage inputImage;
  if (Platform.isIOS) {
    final File? iosImageProcessed = await bakeImageOrientation(pickedFile);
    if (iosImageProcessed == null) {
      throw EvomException('Image not found!');
    }
    inputImage = InputImage.fromFilePath(iosImageProcessed.path);
  } else {
    inputImage = InputImage.fromFilePath(path);
  }
  print('INPUT IMAGE PROCESSED: ${inputImage.filePath}');

  List<Face> faces = await _faceDetector.processImage(inputImage);
  print('Found ${faces.length} faces for picked file');
  return faces;
}

Future<File?> bakeImageOrientation(XFile pickedFile) async {
  if (Platform.isIOS) {
    final directory = await getApplicationDocumentsDirectory();
    final path = directory.path;
    final filename = DateTime.now().millisecondsSinceEpoch.toString();

    final imglib.Image? capturedImage =
        imglib.decodeImage(await File(pickedFile.path).readAsBytes());
    if (capturedImage == null) {
      return null;
    }

    // Bake the EXIF orientation into the pixel data so ML Kit
    // receives an upright image.
    final imglib.Image orientedImage = imglib.bakeOrientation(capturedImage);

    final imageToBeProcessed = await File('$path/$filename')
        .writeAsBytes(imglib.encodeJpg(orientedImage));

    return imageToBeProcessed;
  }
  return null;
}
```

The above solution has been incredibly helpful in resolving the zoomed-in front-camera selfie issue on iOS devices and improving overall face detection accuracy. I've implemented this approach with some modifications to fit my specific use case, and I wanted to share how it benefited my project in case it helps others:

  1. Zoomed-out selfies: The bakeImageOrientation function effectively addressed the iOS front camera's default zoom-in issue, resulting in properly framed selfies.

  2. Improved face detection: By ensuring correct image orientation, face detection accuracy improved significantly for both selfies and uploaded images.

  3. Consistency in image processing: I applied the same image-processing technique to both selfie capture and image uploads from the gallery. This consistency was crucial for our facial recognition feature, where we compare uploaded photos with the user's selfie to ensure the user uploads authentic pictures.

Here's a snippet of how I achieved this:


```dart
Future<XFile> _processIOSImage(XFile pickedFile) async {
  // ... [Same implementation as bakeImageOrientation] ...
}

// In selfie capture
if (Platform.isIOS && byCamera) {
  file = await _processIOSImage(file);
}

// In image upload from gallery
if (Platform.isIOS) {
  file = await _processIOSImage(file);
}
```