rFlex / SCRecorder

iOS camera engine with Vine-like tap to record, animatable filters, slow motion, segments editing
Apache License 2.0

How Can I Add Face Detector? #220

Open · ToanNguyenCong opened this issue 9 years ago

ToanNguyenCong commented 9 years ago

I want to add a face detector using Core Image, but I can't get it working. Could you tell me what I need to do?

felixchan commented 8 years ago

I'd like to know this too. How can we do this with SCFilter?

qingfeng commented 8 years ago

I'd like to know this too.

twomedia commented 8 years ago

I managed to achieve this recently in an iMessage app using SCRecorder.

I subclassed SCRecorder and overrode the captureOutput:didOutputSampleBuffer:fromConnection: method to detect faces in each sample buffer:

// Instance variable on the SCRecorder subclass, created lazily on the first frame.
CIDetector *faceDetector;

- (void)setupCIDetector
{
    // Low accuracy keeps per-frame detection fast enough for real-time use.
    faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                      context:nil
                                      options:@{CIDetectorAccuracy : CIDetectorAccuracyLow}];
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (!faceDetector) {
        [self setupCIDetector];
    }

    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault,
                                                                sampleBuffer,
                                                                kCMAttachmentMode_ShouldPropagate);

    CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer
                                                      options:(__bridge NSDictionary *)attachments];
    if (attachments) {
        CFRelease(attachments);
    }

    // Each element is a CIFaceFeature; use its bounds (and eye/mouth positions) to place props.
    NSArray *features = [faceDetector featuresInImage:ciImage
                                              options:nil];

    // Forward the buffer so SCRecorder keeps recording normally.
    [super captureOutput:captureOutput didOutputSampleBuffer:sampleBuffer fromConnection:connection];
}

Now that we've detected faces, you need to update the on-screen props so they're displayed in the correct position. Using the above code you can position props in real time, but you will need to implement something else for the export. In my case I was recording video and exporting it as a GIF, so I ran CIDetector on each frame of the GIF export to calculate the correct position for the props, then used UIGraphics to draw the overlay view onto the image.
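For the positioning step, the main wrinkle is that CIFaceFeature bounds come back in Core Image coordinates (origin at the bottom-left of the image), while UIKit uses a top-left origin. Here is a minimal Swift sketch of both steps; viewRect and composite are hypothetical helpers (not SCRecorder API), and the mapping assumes the preview shows the whole image with no aspect-fill cropping or front-camera mirroring:

import UIKit
import CoreImage

// Map a face rect from Core Image coordinates into a view's coordinates.
// `imageExtent` is the extent of the CIImage the detector ran on.
func viewRect(forFaceBounds faceBounds: CGRect,
              imageExtent: CGRect,
              viewSize: CGSize) -> CGRect {
    let scaleX = viewSize.width / imageExtent.width
    let scaleY = viewSize.height / imageExtent.height

    // Core Image's origin is bottom-left, UIKit's is top-left: flip y while scaling.
    let x = faceBounds.origin.x * scaleX
    let y = (imageExtent.height - faceBounds.maxY) * scaleY
    return CGRect(x: x,
                  y: y,
                  width: faceBounds.width * scaleX,
                  height: faceBounds.height * scaleY)
}

// The UIGraphics step for export: draw the prop overlay onto one frame.
func composite(frame: UIImage, overlay: UIImage, overlayRect: CGRect) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: frame.size)
    return renderer.image { _ in
        frame.draw(at: .zero)
        overlay.draw(in: overlayRect)
    }
}

For the GIF export described above, you would call composite on each exported frame, with overlayRect computed from that frame's detected face.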

Hope this helps! It is definitely possible, even with 3D objects! Here's a video of a GIF iMessage sticker generated by my app: https://sendvid.com/ww2wi55h

EDIT: Since the video is no longer available, here's the iMessage app I built using the code above: https://itunes.apple.com/us/app/sticker-booth-animated-gif/id1157522905?mt=8

vexelgray commented 7 years ago

Great work, @twomedia!

Could you upload a demo showing how to track 3D objects on the face?

It would be great!

Regards

AtoHenok commented 7 years ago

@rFlex @twomedia @vexelgray How do I access the features array from the view controller that is displaying my camera view? So far I have created a subclass of SCRecorder called Detector that does what you have done above. Then I created an instance of it with let myDetector = Detector() and printed myDetector.allFeatures.count, but the array's count is always 0.

import UIKit
import AVFoundation
import CoreImage
import SCRecorder

class Detector: SCRecorder {

    // Created lazily on the first frame; CIDetector has no usable plain initializer.
    var faceDetector: CIDetector?

    var allFeatures = [CIFeature]()

    func setupCIDetector() {
        faceDetector = CIDetector(ofType: CIDetectorTypeFace,
                                  context: nil,
                                  options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    }

    override func captureOutput(_ captureOutput: AVCaptureOutput!,
                                didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                                from connection: AVCaptureConnection!) {
        // Only build the detector once, not on every frame.
        if faceDetector == nil {
            setupCIDetector()
        }

        if let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
            let attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault,
                                                            sampleBuffer,
                                                            kCMAttachmentMode_ShouldPropagate)
            // init(cvPixelBuffer:options:) avoids the iOS 9 availability check
            // that init(cvImageBuffer:options:) would require.
            let ciImage = CIImage(cvPixelBuffer: pixelBuffer,
                                  options: attachments as? [String: Any])

            allFeatures = faceDetector?.features(in: ciImage) ?? []
            print("😩: \(allFeatures.count)")
        }

        super.captureOutput(captureOutput, didOutputSampleBuffer: sampleBuffer, from: connection)
    }
}
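A hedged note on the question above, not a confirmed answer from the thread: features only accumulate in the Detector instance that is actually attached to the capture session and driving the preview. A freshly created Detector() that never runs a session never receives sample buffers, so its allFeatures stays empty. Assuming a cameraView outlet and the Detector class from the previous comment, the wiring might look like this (previewView, prepare, and startRunning are SCRecorder API):

import UIKit
import SCRecorder

class CameraViewController: UIViewController {

    @IBOutlet weak var cameraView: UIView!   // assumed outlet for the preview

    // The SAME instance must both drive the session and be queried for features.
    let myDetector = Detector()

    override func viewDidLoad() {
        super.viewDidLoad()
        myDetector.previewView = cameraView
        try? myDetector.prepare()   // builds the capture session
        myDetector.startRunning()
    }

    func logFaceCount() {
        // Nonzero once frames containing faces have been processed.
        print(myDetector.allFeatures.count)
    }
}

Note that allFeatures is written on the capture queue, so for production code you would hand results back via a main-queue callback rather than polling the property.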