UPDATE: Never mind — you're supposed to add connection.videoOrientation = AVCaptureVideoOrientation.portrait to the OTHER captureOutput, NOT the one I originally indicated. Sorry.

ANOTHER UPDATE: just replace the whole captureOutput with the following:
// MARK: AVCaptureVideoDataOutputSampleBufferDelegate
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    // Force the connection into portrait so the frames (and the landmark coordinates) match the portrait UI.
    connection.videoOrientation = AVCaptureVideoOrientation.portrait

    if !currentMetadata.isEmpty {
        // Wrap each detected face's bounds in an NSValue so the rects can cross the Objective-C++ bridge.
        let boundsArray = currentMetadata
            .compactMap { $0 as? AVMetadataFaceObject }
            .map { NSValue(cgRect: $0.bounds) }

        // Run dlib's landmark detection on this frame, restricted to the detected face rects.
        wrapper?.doWork(on: sampleBuffer, inRects: boundsArray)
    }

    // Display the frame.
    layer.enqueue(sampleBuffer)
}
You also have to change the .map part to .map { NSValue(cgRect: $0.bounds) }, exactly as shown above.
I managed to fix the issue in the latest version of the codebase, where this tutorial no longer works, just four years later. What a long time it took me to solve this. Jokes aside, it actually took a decent amount of time to figure out:

If you go through the instructions provided in https://github.com/zweigraf/face-landmarking-ios/issues/5, you'll quickly figure out that there's no convertScaleCGRect function in the recent version of this codebase.

That's because the author has pushed a "simplified version" of the DlibWrapper class, so I had to dig through the previous commit history until I found it (the one from May 2016).
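For context, here's a rough Swift sketch of the kind of conversion that helper is responsible for. This is only an illustration, not the actual Objective-C++ code from the old commit: the function name below is made up, and the sketch simply assumes that the face bounds coming out of AVCaptureMetadataOutput are normalized to 0...1 and have to be scaled up to the frame's pixel size before dlib can use them.

import CoreGraphics

// Illustration only: scale a normalized (0...1) face bounds rect up to the
// pixel dimensions of the video frame. The real DlibWrapper does something
// along these lines on the Objective-C++ side before building dlib rectangles.
func scaleNormalizedRect(_ rect: CGRect, toPixelSize size: CGSize) -> CGRect {
    return CGRect(x: rect.origin.x * size.width,
                  y: rect.origin.y * size.height,
                  width: rect.size.width * size.width,
                  height: rect.size.height * size.height)
}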
First and foremost, replace the entirety of your DlibWrapper.mm file with the following:

I applied the changes described in the comments on issue #5, so it now supports portrait mode.

The next change you need to make: go to SessionHandler and locate the following:
BE AWARE! There are TWO captureOutputs — you must choose the one that has NO code inside the function body except a single print line. Then you wanna add the following inside that function:
connection.videoOrientation = AVCaptureVideoOrientation.portrait
So the final version of SessionHandler's captureOutput function will look like the following:
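The exact code block isn't reproduced here, but as a minimal sketch, assuming the print-only method is the didDrop delegate callback, it ends up looking roughly like this:

func captureOutput(_ output: AVCaptureOutput, didDrop sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    // Same portrait fix on this connection as well.
    connection.videoOrientation = AVCaptureVideoOrientation.portrait
    print("DidDropSampleBuffer")
}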
BOOM. All issues have been resolved. OH BY THE WAY, add

session.sessionPreset = AVCaptureSession.Preset.vga640x480

RIGHT BEFORE the session.startRunning() line in ViewController.swift to enable legacy-style video streaming (640x480 instead of the roughly 1024-pixel default), so that there's LESS noise and instability in the landmark data. Lower resolution helps because it reduces the amount of work the device has to do per frame.
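In context, the placement looks roughly like this (a sketch only; the session property name and whatever surrounds it in your ViewController.swift are assumptions):

// Somewhere in ViewController.swift, right before the capture session starts.
// The `session` name is an assumption; use whatever the project actually calls it.
session.sessionPreset = AVCaptureSession.Preset.vga640x480
session.startRunning()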
Hope that helps. I know I'm maybe a little too late now that the Vision framework and ARKit are out, but if you're writing C++ code in tandem with Swift and want to pull that C++ code in through Objective-C++, this is the tutorial for you!
John Seong