Closed: woodbe closed this issue 5 years ago
On further thought, I think I would move the "base" tests to 10-19 (that, or explicitly give two-digit numbers like 01, 02, etc.) to make it consistent. That said, I think the 10, 20, 30 scheme would work well.
Actually, maybe it should be "Head Toolbox". (I know, I keep making comments as this keeps turning in my mind during the review.)
I usually like to look at toolboxes from the perspective of their purpose. In other words: if a developer has a system working with a certain biometric characteristic, I like having just one toolbox to test it. This would mean one toolbox for face, one for 3D face, and so on. What I would support, however, is further integrating and combining the framework of all the toolboxes. All toolboxes need a common introduction and methodology as well as a common structure. And from there, the difference between seeing the toolboxes as one or two is really minor.
I get your point, but I'm also looking at all the face and eye tests and seeing the exact same thing over and over again, which makes long-term support a problem. If someone comes back with an issue, say that a new type of printer should be added for some new paper that came out, it is likely to be relevant to all of these tests and would need to be added to all of them. I would prefer to update that once rather than three times, and to be sure the changes are all correct.
I guess what I'm going for is "Toolbox Modules" (would that be TB-Modules?) for the common parts, like everything around the face/eye area.
I completely agree with updating the overview doc further, I was looking at that next (and hence the questions about PAI numbers).
@woodbe Does your "Camera Toolbox" use visible light to capture the face or iris image? Do you think the evaluator should use a "Camera Toolbox" that uses visible light for presentation attacks against iris recognition, or does the "Camera Toolbox" not specify the type of light?
Mobile 2D face recognition captures the face image with visible light, but iris recognition uses NIR light to capture the eye image. I have heard that if the mobile device uses NIR light for iris recognition, a PAI created with near-infrared imaging should have a higher chance of supporting a successful presentation attack than a PAI created from visible-light images, so most research papers on iris presentation attacks use NIR light to create the PAIs. The same is true for 2D face recognition: all PAIs created in the research papers use visible light to capture the target user's face image.
I agree with taking a modular approach so that toolboxes can share the same modules as much as possible, rather than creating the same tests in different toolboxes. But I would like to know your proposal in more detail.
Very good comment from @n-kai - besides the basic PAIs, one should consider PAIs that are dedicated to the camera technology. The same question would apply to 3D fingerprint vs. 2D fingerprint vs. vein recognition camera-based capture. Overall, there should be basic/common models for one type of sensor (image-based vs. electric-signal-based, for instance), where almost the same variants can be used for that type whatever the biometric modality and sensor technology is, and then it should be amended with sensor-specific PAIs.
@n-kai so I was thinking about that, but hadn't necessarily gone that far in terms of really thinking it out. My thought had been that when I was writing the iris tests, there was a lot of commonality even between the two light spectrums, and the only real difference was choosing which one you used. Obviously there were some tests that didn't work in NIR (ones where the output would have been on a screen), but otherwise these were really the same test.
My thought about this was really what @BrJu is saying: there would be a common set of tests for a type of sensor, regardless of what it is actually trying to capture. Vein recognition, for example, seemed to use NIR, as does the iris tech (at least the ones I'm familiar with), so those could probably use similar tests (if not identical ones), with the difference being that the capture point is the eye vs. the hand/wrist. And since it is a light-based camera either way (just different wavelengths), even there it would largely be common with the tests for face (mainly thinking 2D here, not 3D, though even with 3D it would seem like you need to check the 2D case to make sure the 3D sensor actually uses 3D).
So I don't know if this is really the right way to go. I was thinking about it while going through the toolboxes converting them to adoc and seeing how much they are the same. As of right now we have 5 separate toolboxes (I'm counting the fingerprint one we are still waiting for in that number), but all the ones we have been working on directly are camera-based in some way. Long term, I was thinking that the maintenance of 5 separate toolboxes (and that assumes no new modalities are added) is going to be a lot of work, and that if we took a little time to compare and contrast the existing toolboxes, it seemed like we could create some sort of type-based toolbox that would provide a LOT of commonality and reuse, with much less maintenance.
Here is roughly what I think we would get out of this (an offhand idea of how to approach it):
Additional toolbox types would be created. For example, while fingerprint sensors are in many cases image-based, I don't know if I would place them in the camera toolbox because they are usually contact-based rather than capturing from a distance. (I have seen an image-based fingerprint setup demoed before that used a smartphone camera, and that one I would put in the camera toolbox.) But in general, I would probably create some sort of "touch-based toolbox" for fingerprint and anything else that requires physical contact with the sensor.
Based on the 5/16 call, will look into asking BSI to release the face/finger toolboxes for use here. Will follow up after the AVA_VAN direction.
Closing as these will be kept separate to handle the differences between them.
After reviewing the face toolbox submitted by @stemotre, I see a lot of overlap with the eye toolbox. While there are some specific cases in each (I don't think anyone is planning to 3D print an iris, and contact lenses on a face mask would be unnecessary), it seems like most of this is very common.
My proposal would be to combine these into a single set with the unique cases getting individual numbering. For example, we have been naming tests in an X-Y manner, and we would say that X between 1-10 is reserved for common tests, X between 20-29 are additional for Face-specific tests, and X between 30-39 are for Eye-specific tests.
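To make the reserved-range idea concrete, here is a minimal sketch of how the X part of an "X-Y" test ID could be resolved to a toolbox section. The ranges and section names below are just the ones proposed above; the function name and any additional modality ranges are hypothetical, purely for illustration.

```python
# Hypothetical helper illustrating the proposed "X-Y" numbering scheme:
# the X component decides whether a test is common or modality-specific.
def classify_test(test_id: str) -> str:
    """Map an 'X-Y' style test ID to its toolbox section."""
    x = int(test_id.split("-")[0])
    if 1 <= x <= 10:
        return "common"          # shared across all modalities
    if 20 <= x <= 29:
        return "face-specific"   # e.g. contact lenses on a mask
    if 30 <= x <= 39:
        return "eye-specific"    # e.g. printed iris patterns
    return "unassigned"          # room left for future modalities

print(classify_test("3-1"))    # common
print(classify_test("21-4"))   # face-specific
print(classify_test("30-2"))   # eye-specific
```

One nice property of leaving gaps between the ranges is that a new modality (fingerprint, vein, etc.) can claim its own decade later without renumbering any existing tests.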
What I'm really getting at here is that this is really a "Camera Toolbox" that could encompass anything along these lines. My main point is that, for the most part, the only difference between the majority of these tests is whether the sensor is looking at the eye or the whole face, which is really about the target; the PAI being created is the same.