woodbe closed this issue 4 years ago
I basically agree with your proposal but have some comments.
"1. Scan from a photo and reprint" can be removed. Most of research papers use photo or video that show face image directly captured from the target.
"3D printed mask/head" can also be removed. According to the https://www.wired.co.uk/article/hackers-trick-apple-iphone-x-face-id-3d-mask-security, The researchers concede, however, that their technique would require a detailed measurement or digital scan of a the face of the target iPhone's owner. That puts their spoofing method in the realm of highly targeted espionage, rather than the sort of run-of-the-mill hacking most iPhone X owners might face.
2D sensor test
I think that a 2D sensor would need to be tested against classes 2-8 (I assume that 1 would be removed from the toolbox), unless there is a clear reason to skip 8. The quality of custom wearable masks can improve in the future, and a 2D sensor could also be spoofed by such a mask.
One comment about the 3D printed mask/head: this was originally added by Stephanie as part of the update from Apple. I agree that it is a targeted attack, but I'm not sure it is really much different from the custom mask. If you put that mask on your own face without a close match to the target, how would it work? (Face matching has been shown to be vulnerable to close siblings or a parent/child with similar features, so I'm not sure just how hard the target needs to be.)
@The-Fiona suggests adding additional methods for 3D, such as using a doll/mannequin/modeling clay to create something that could be used instead of only having a 3D printer.
This could be added either as a note on the 3D print class (removing the restriction to printing) or as a new class, probably lower (7 or 8 in the current list) due to the lower cost.
A follow-up question is whether all cameras should be subject to one set of tests regardless of 2D or 3D capability, or whether the sliding list should be used.
This may be handled by #20 in terms of defining the appropriate scoping based on the attack potential.
Closing, as the questions here have been resolved in the current face toolbox.
Looking through the various face attacks we have (in the Face toolbox and the 2D Face toolbox), I see, for one, that we need to clean these up as they aren't exactly consistent.
But using this information, I come up with 9 different classes of tests (though I think we should remove the lowest class, but that is a different topic). This spectrum of tests covers all 2D and 3D scenarios as one complete set of tests, regardless of the sensor type. The specific tests would be determined by ranges along this spectrum.
2D tests
Now I'm not sure if 8 or 9 should be the top (I have no idea of the cost difference or actual collection requirements), but this is the list as I see it. There are individual tests within these classes, but these I think are the major classes of attacks we would be looking at.
I have marked tests 1-5 as 2D and tests 6-9 as 3D.
I would expect that a 2D sensor would need to be tested against 1-6 or 1-7 (depending on how difficult you consider classes 6 and 7 to be).
I would expect that a 3D sensor would need to be tested against 4-9, so you get the high resolution 2D tests and then all the 3D tests.
So with this you have a single set of face tests, and you specify that the vendor needs to lay out what type of sensor it is. If they don't specify what the sensor is, then all tests (i.e. all 9 classes) would need to be run.
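The selection rule above can be sketched in a few lines. This is only an illustration, not anything in the toolbox; the function name is made up, and I've taken the stricter 1-7 option for 2D sensors from the two alternatives mentioned:

```python
# Hypothetical sketch of the proposed test-class selection.
# Assumes the nine attack classes discussed above, with 1-5 marked 2D
# and 6-9 marked 3D; the 1-7 range for 2D sensors is the stricter of
# the 1-6 / 1-7 options.

def required_test_classes(sensor_type):
    """Return the attack classes a lab would run for a declared sensor type."""
    if sensor_type == "2D":
        return list(range(1, 8))   # classes 1-7
    if sensor_type == "3D":
        return list(range(4, 10))  # high-res 2D tests plus all 3D tests
    # Vendor did not declare the sensor type: run all nine classes.
    return list(range(1, 10))
```

Note that every path overlaps on classes 4-7, so an undeclared sensor simply gets the union of both lists.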
Having them split into 2 different test sets actually runs into the same problem of which set of tests the lab should run. If we say the lab doesn't know anything about the sensor, then they don't know which set of tests to run anyway, which is silly, even if we aren't providing a lot of internal details. The advantage of this approach is that we have a single set of face tests to maintain, and the face methodology specifies the requirements for each type of sensor.