rramesh2000 opened 4 years ago
The modules, in short, check for a couple of patterns that resemble the ones in this particular picture.
If you were to put a picture of another person and hold it in front of a camera, then the API would recognize that said picture as a face, because it matches the aforementioned patterns.
Here's a very nice representation of the scenario that I'm talking about, which I just made in Microsoft Paint for the sake of clarity:
I get that, but how can I make sure it's a real person and not a photo or picture?
You have to add a liveness check to your code in order to detect spoof attacks.
@alwayslivid That is a very good picture. A+
This is a good thread.
The problem is not limited to flat 2D photos. There are demos showing how the iPhone's face recognition can be tricked with a 3D mask of a person.
It's the sort of cat-and-mouse game that never ends between security systems and hackers. One potential solution, which would require significant thought and effort, is federated learning: enable users from around the world to share examples of images that break the latest face detection models and contribute to an improved next version. This type of approach (previously referred to in AI as group filtering) has been used successfully for email spam by Google and other companies with large resources. Federated learning could potentially distribute the effort among peers.
The simplest (or maybe silliest) way to do a liveness check is eye-blink detection. The process is as follows:
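A minimal sketch of the eye-blink idea, using the common eye-aspect-ratio (EAR) heuristic. This assumes you already have per-frame eye landmarks from some detector (e.g. a 68-point facial-landmark model); the function names, the 0.2 threshold, and the frame counts below are illustrative choices, not part of any specific library:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Compute the eye aspect ratio (EAR) from six (x, y) eye landmarks.

    Landmark order follows the usual 68-point convention: index 0 and 3
    are the horizontal eye corners, pairs (1, 5) and (2, 4) are the
    upper/lower lids. EAR drops sharply when the eye closes.
    """
    eye = np.asarray(eye, dtype=float)
    a = np.linalg.norm(eye[1] - eye[5])  # first vertical distance
    b = np.linalg.norm(eye[2] - eye[4])  # second vertical distance
    c = np.linalg.norm(eye[0] - eye[3])  # horizontal distance
    return (a + b) / (2.0 * c)

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks in a sequence of per-frame EAR values.

    A blink is registered when EAR stays below `threshold` for at least
    `min_frames` consecutive frames and then rises above it again.
    A static photo held in front of the camera never blinks, so a
    liveness check can demand at least one blink within a few seconds.
    """
    blinks = 0
    below = 0
    for ear in ear_series:
        if ear < threshold:
            below += 1
        else:
            if below >= min_frames:
                blinks += 1
            below = 0
    return blinks
```

In a real pipeline you would feed `eye_aspect_ratio` the landmarks from each video frame and run `count_blinks` over a sliding window; the threshold usually needs tuning per camera and landmark model.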
Is it possible to differentiate between a two-dimensional face and a real face?
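With an ordinary RGB camera this is hard, which is why the thread keeps coming back to liveness cues like blinking. If a depth sensor is available, one commonly cited trick is to exploit the fact that a printed photo or a screen is (near-)planar while a real face has relief. The sketch below is a hypothetical illustration of that idea, not any particular API; `is_flat_surface` and its threshold (in the depth map's units, e.g. millimetres) are assumptions:

```python
import numpy as np

def is_flat_surface(depth_patch, residual_threshold=5.0):
    """Heuristic 2D-vs-3D check on a depth patch covering the face.

    Fits a plane z = a*x + b*y + c to the depth values by least squares.
    A printed photo or phone screen is near-planar, so the fit residual
    is tiny; a real face has nose/cheek relief and deviates from any
    single plane by several millimetres.
    """
    h, w = depth_patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    z = depth_patch.ravel().astype(float)
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    residual = np.sqrt(np.mean((A @ coeffs - z) ** 2))  # RMS plane deviation
    return residual < residual_threshold
```

A 3D mask defeats this particular check, of course, which is why the earlier comments treat anti-spoofing as a layered, cat-and-mouse problem rather than a single test.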