A UIImageView clone with a catch: FaceImageView automatically scales its image to fill its bounds while keeping any detected faces in view. This is useful if you want to display people-based dynamic content with UIViewContentModeScaleAspectFill but are having trouble with it showing the wrong part of the image, like the background or someone's feet.
The class uses the Core Image face detection APIs available in iOS 5.0 and later. Face detection works well in many clear cases, but it is not entirely reliable. If no faces are detected in an image, FaceImageView falls back to centering the image, matching UIImageView's behavior with UIViewContentModeScaleAspectFill.
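Under the hood, this kind of detection is a CIDetector pass over the image. A minimal sketch of how a view like this might locate faces; the detector calls are the real Core Image API, while the fallback comments are illustrative of the behavior described above:

```objc
#import <CoreImage/CoreImage.h>

// Run Core Image face detection on a UIImage (iOS 5.0+).
CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage];
CIDetector *detector =
    [CIDetector detectorOfType:CIDetectorTypeFace
                       context:nil
                       options:@{CIDetectorAccuracy : CIDetectorAccuracyLow}];
NSArray *features = [detector featuresInImage:ciImage];

if (features.count == 0) {
    // No faces found: fall back to centering the image,
    // like UIViewContentModeScaleAspectFill.
} else {
    // Use the detected face rects to offset the image so they
    // stay inside the view's visible bounds.
    CGRect faceBounds = [(CIFaceFeature *)features.firstObject bounds];
}
```

Note that CIDetectorAccuracyLow trades precision for speed; CIDetectorAccuracyHigh is slower, which is one reason detection can take a moment on larger images.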
A small demo app is included to try out different images and view sizes.
Installing
Add FaceImageView.{h,m} from the demo project to your own sources, or, if you use CocoaPods, add the following line to your Podfile:
pod 'FaceImageView'
Using
As simple as instantiating a FaceImageView object, giving it an image, and adding it to your view:
FaceImageView *imageView = [[FaceImageView alloc] initWithFrame:<CGRect>];
imageView.image = <UIImage>;
[self.view addSubview:imageView];
It may also be wise to set the image asynchronously, as face detection can sometimes take a second:
[imageView setImage:<UIImage> completion:^{}];
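For example, the asynchronous setter can be combined with off-main-thread image loading. This is a hedged sketch: `setImage:completion:` is the API shown above, while the dispatch scaffolding and the `path` variable are illustrative assumptions:

```objc
// Load the image off the main thread, then hand it to the view on the
// main queue; the completion block fires once face detection finishes.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *photo = [UIImage imageWithContentsOfFile:path]; // `path` is a stand-in
    dispatch_async(dispatch_get_main_queue(), ^{
        [imageView setImage:photo completion:^{
            NSLog(@"Face detection finished; image is positioned.");
        }];
    });
});
```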
Faces automatically stay centered in the view as it changes size.
Posted for the Objective-C hackathon effort :)