const constraints = { video: true };
navigator.mediaDevices.getUserMedia(constraints).then(stream => {
  // MediaStreamFaceDetector is the proposed interface sketched below.
  const msfd = new MediaStreamFaceDetector(stream, { fastMode: true, maxDetectedFaces: 1 });
  msfd.onfacedetected = event => {
    for (const face of event.faces) {
      console.log(`Face ${face.id} detected at (${face.x}, ${face.y}) with size ${face.width}x${face.height}`);
    }
  };
});
This looks cool, but it is closer to tracking than to detection, and detection is the concern of this spec. Perhaps we could raise it on the WICG Discourse as a separate but related idea?
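To make the detection-vs-tracking distinction concrete: a detection-only event would omit the stable `face.id` seen in the snippet above, reporting only per-frame bounding boxes, since an id that persists across frames is what turns detection into tracking. A minimal sketch of the difference, assuming the same hypothetical event shape (the `describeFace` and `handleFaceDetected` names are illustrative, not part of any spec):

```javascript
// Detection-only: describe a face by its per-frame bounding box alone.
// No id means callers cannot correlate this face with one from a
// previous frame -- that correlation is the tracking step.
function describeFace(face) {
  return `Face detected at (${face.x}, ${face.y}) with size ${face.width}x${face.height}`;
}

// Handler mirroring the shape of the onfacedetected example above.
function handleFaceDetected(event) {
  for (const face of event.faces) {
    console.log(describeFace(face));
  }
}

// Exercised with a mock event, so no camera is required:
handleFaceDetected({ faces: [{ x: 10, y: 20, width: 100, height: 120 }] });
```

Under this framing, tracking (assigning and maintaining `face.id` across frames) could layer on top of a detection-only API rather than being baked into it.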
Use Cases
Platform-specific implementation notes
Android
Camera2 CaptureRequest Event-Driven Pipeline for Face Tracking
iOS
Tracking Faces in Video
Rough sketch of MediaStreamFaceDetector
Usage