usnistgov / 800-63-3

Home to public development of NIST Special Publication 800-63-3: Digital Authentication Guidelines
https://pages.nist.gov/800-63-3/

Broaden the use of biometrics to permit use as limited authenticators - USCIS #353

Closed amolkgupta closed 8 years ago

amolkgupta commented 8 years ago

Organization: USCIS (United States Citizenship and Immigration Services)
Type: 1 (Federal)
Document (63-3, 63A, 63B, or 63C): 63B
Reference (Include section and paragraph number): 5.2.3
Comment (Include rationale for comment):

USCIS (United States Citizenship and Immigration Services) is pleased to provide these comments during the public preview of Special Publication 800-63-3. The Biometrics Division supports the integrity of the immigration system by providing quality data and biometric services. The Applied Technology Division supplies architectural and implementation assistance to technology projects within USCIS. We submit the following requests for changes with the goal of improving NIST's digital authentication guidelines.

These two comments request broadening the use of biometrics to permit their use as limited authenticators.

Comment 1. SP 800-63B should permit the use of biometrics as a limited authenticator

We request biometrics be defined as a permitted authenticator under Authenticator Assurance Level 1 and as a second-factor authenticator when paired with a Memorized Secret in Authenticator Assurance Level 2. In the current draft, the use of biometrics is permitted only as a second factor for use with multi-factor authenticators. Specifically, biometrics are permitted in the current draft as a second factor in MF OTP devices (5.1.5), MF cryptographic software (5.1.7), and MF cryptographic device (5.1.8).

The caution in the current draft about restricting the use of biometrics as an authenticator is understandable given the relative newness of the biometric technologies used to prove identity remotely. But we believe the concerns behind the restrictions in section 5.2.3 are overstated and can be addressed with appropriate safeguards rather than an outright ban on use.

Specifically, section 5.2.3 currently argues that:

"Biometric template protection schemes provide a method for revoking biometric credentials that are comparable to other authentication factors (e.g., PKI certificates and passwords). However, the availability of such solutions is limited, and standards for testing these methods are under development."

We agree that template revocation measures are immature. However, we believe the development of technology to protect biometric templates will increase rapidly and standards to test template revocation will be available in the near future. Rather than limit the use of biometrics as an authenticator because of these current shortcomings, the 800-63 guidelines should instead state that a biometric authenticator system SHALL have an effective means of template revocation before the authenticator may be used for AAL 1 or AAL 2. This type of restrictive wording would serve to satisfy the intent of ensuring a biometric authenticator has the necessary security precautions in place while helping to future-proof these NIST guidelines.

"Biometric characteristics do not constitute secrets. They can be obtained online or by taking a picture of someone with a camera phone (e.g. facial images) with or without their knowledge, lifted from objects someone touches (e.g., latent fingerprints), or captured with high resolution images (e.g., iris patterns for blue eyes)."

We agree with a sentiment expressed by a MITRE Corp. commenter during the spring 2015 call for public comments on updating 800-63-2: that the fact that a biometric characteristic is not a secret misses the point of authenticators -- to provide confidence in a claimant's identity. Biometric authenticators are vulnerable to attack by techniques such as lifting latent fingerprints, taking a high-resolution facial photograph, or recording someone's voice. Equally, memorized secrets are vulnerable to attack by guessing, shoulder surfing, lifting from a sticky note stuck under a mousepad, or cracking weak password hashes from a stolen secrets file.

We believe that non-secret biometric authenticators can be implemented in ways that are more secure than Memorized Secrets. We also believe biometrics are inherently no more vulnerable to a successful attack than a Memorized Secret authenticator, especially since biometrics almost certainly will be tested using presentation attack detection technologies before being accepted. A stolen password "secret," however, likely will be accepted by most systems at face value. Yet memorized secrets are permitted in the current draft as a full authenticator for AAL 1 and as a second-factor authenticator for AAL 2. We therefore recommend that biometrics not be treated by these guidelines as an inferior authenticator just because they might not be secret.

"Biometric matching is probabilistic, whereas the other authentication factors are deterministic."

This statement is made as if it were a shortcoming of biometric systems rather than a strength. For instance, if I were to surreptitiously steal your username and password by shoulder surfing while you logged in, I would have a near-100% probability of faking this authenticator: I possess an exact copy. However, if I were to surreptitiously record your voice from a few feet away while you logged into your bank account by voice on your phone, the bank's verification software might well reject my replay attack with your copied voice: the sound quality might not meet the bank's threshold requirement, the verification algorithm itself might sense that the voice and microphone were too far apart for a legitimate login, or both. Probabilistic matching of a voice biometric could thus be more secure than deterministic matching of a memorized secret.

If biometric authenticators can be shown to be implementable in ways that are just as secure as a memorized secret authenticator, we believe the NIST guidelines should permit biometrics to be used at the same level as memorized secrets: as an authenticator for AAL 1 and as a second-factor authenticator for AAL 2. If NIST believes biometrics currently are too easy to falsify, then the guidelines should set a maximum false-match rate or similar guidance to ensure any biometric system used as an authenticator provides the necessary level of assurance.

The current draft recognizes this danger of authentication systems being fooled by fake biometrics or tampered sensors. Section 5.2.3 requires biometric systems to demonstrate an equal error rate of 1 in 1,000 or better, and to demonstrate at least a 90% resistance to presentation attacks. We recommend a similar approach be taken for the permissible use of biometrics as an authenticator. For example, the guidelines could say that to use a biometric system as an authenticator for AAL 1 and AAL 2, the equal error rate must be better than one in (fill-in-the-blank) and presentation attacks must be thwarted at a rate greater than (some number close to 100%).

We do not suggest at what level those rates are set. But there must be some level at which NIST believes a biometric system can provide an adequate level of assurance. Even if no biometric system in use today could meet the level NIST sets in these guidelines, a year from now there could be several products or techniques that do.
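The threshold-based qualification rule proposed above could be sketched as a simple policy gate. This is an illustrative sketch only: the dataclass, function names, and the placeholder limits standing in for the "fill-in-the-blank" rates are all our own assumptions, not anything from the draft.

```python
# Hypothetical policy gate: a biometric system qualifies as an AAL1/AAL2
# authenticator only if its measured error rates meet the floors NIST sets.
# All names and numeric limits here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class BiometricSystemMetrics:
    equal_error_rate: float  # e.g. 0.001 means 1 in 1,000
    pad_resistance: float    # fraction of presentation attacks thwarted


# Placeholder values standing in for the rates NIST would choose.
MAX_EER = 0.001
MIN_PAD_RESISTANCE = 0.90


def qualifies_as_authenticator(m: BiometricSystemMetrics) -> bool:
    """Return True only if both error-rate requirements are met."""
    return m.equal_error_rate <= MAX_EER and m.pad_resistance >= MIN_PAD_RESISTANCE


print(qualifies_as_authenticator(BiometricSystemMetrics(0.0005, 0.95)))  # True
print(qualifies_as_authenticator(BiometricSystemMetrics(0.002, 0.95)))   # False
```

A gate like this future-proofs the guideline: as products improve, systems qualify automatically once their measured rates cross whatever floors NIST publishes.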

Comment 2. Ease hardware restrictions to support voice biometrics

The limits placed on the use of biometric authentication in section 5.2.3 of SP 800-63B seem focused on protections needed for local hardware biometric matching. We believe central, non-device verification of voice biometrics could be achieved at a high level of assurance without several of the limits imposed by the text in the current draft. In particular:

"When the biometric sensor and subsequent processing are not part of an integral unit that resists replacement of the sensor, the sensor SHALL demonstrate that it is a certified or qualified sensor meeting these requirements by authenticating itself to the processing element."

These restrictions seem appropriate for a fingerprint sensor where the biometric is verified locally on a device. However, if the biometric is a claimant's voice and biometric matching is performed by a central verifier, the limits defined by the above text place an unneeded burden on the authentication process. For instance, non-local verification of a voice biometric will not be impeded if an attacker is able to replace the hardware microphone or the analog-to-digital voice conversion software on the claimant's device, such as a telephone, tablet, or computer. If an attacker is able to replace the hardware and software on the remote authentication device and, for example, send a pre-digitized voice recording, he or she is unlikely to gain any more advantage than from the simpler attack of playing a prerecorded voice into the device's unaltered microphone.

This tamper-resistance requirement for biometric sensors seems not to apply to all forms of biometrics and thus should be applied only where appropriate. The specific wording in the current draft reduces the viability of using voice biometrics without any measurable gain in identity assurance.

"If matching is performed centrally: Use of the biometric SHALL be bound tightly to a single, specific device that is identified using approved cryptography."

This restriction also needlessly impedes the use of voice as a biometric for the same reasons given above. If a claimant presents a voice sample to a device that will be verified by a central server, allowing the claimant to use multiple devices will not expand the attack surface in ways not also permitted by, say, a memorized secret authenticator. For voice matching, the key is the voice sample to be matched. Whether that voice sample came from Device A today and Device B yesterday matters little in the assurance level of the identity.

We agree that restricting a claimant to a single, never-changing device when presenting a biometric would increase the level of assurance, because this restriction turns the single factor (the biometric, "something you are") into multi-factor authentication by requiring a second factor (the device, "something you have"). But if biometrics can be used as an authenticator as requested in our Comment 1, the second factor could be provided by a memorized secret authenticator rather than by a restricted piece of hardware.

"If matching is performed centrally: ... An authenticated protected channel between sensor and central verifier SHALL be established, and the sensor authenticated, prior to capturing the biometric sample from the claimant."

We believe this restriction also needlessly impedes the use of voice as a biometric for the same reasons given in the above two bullets. We believe that if a claimant establishes a protected channel with a central verifier, such as over an HTTPS connection, the fact that the microphone on the claimant's hardware device has not been authenticated will not diminish the security or assurance level of voice biometrics.

Thank you for considering these comments.

enewt commented 8 years ago

@amolkgupta - Thank you for your thoughtful comments. For the use cases you are concerned about, are they remote authentication applications? Or is the biometric sensor under some level of control or supervision?

wangxjnj commented 8 years ago

The fundamental problem with biometrics is the FMR. FNMR is just a UX issue, but FMR is a big security risk. What should the rate be: 1%, 2%, or ...? How would that actually be verified before deployment? Who should make the call on the rate? Is it risk acceptance that some bad guys will be let in? Is any CISO willing to take the responsibility and sign off on that risk acceptance?
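A back-of-the-envelope calculation shows why the FMR is a security question rather than a usability one: expected impostor break-ins scale linearly with attack volume. All numbers below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Illustrative only: at a given false-match rate (FMR), the expected number
# of successful impostor attempts grows linearly with the number of attempts.
# The rates and attack volumes are hypothetical.

def expected_false_accepts(fmr: float, impostor_attempts: int) -> float:
    """Expected impostor logins accepted, assuming independent attempts."""
    return fmr * impostor_attempts


# At a 1% FMR, 10,000 impostor attempts yield about 100 false accepts.
print(expected_false_accepts(0.01, 10_000))
# At 0.1% (1 in 1,000), the same attack volume yields about 10.
print(expected_false_accepts(0.001, 10_000))
```

This is the risk a CISO would have to accept when signing off on a chosen rate: the rate fixes the expected break-in count only relative to an assumed attack volume.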

Biometrics should only be used as one factor of a multi-factor authenticator. They should not be used alone.

amolkgupta commented 8 years ago

@enewt - thanks for the response. Some more thoughts below.

Our primary use case would be to verify a customer's identity using a remotely supplied voice biometric and comparing that remote voice to the customer's voice already on file. The voice biometric would be used as one authenticator to help verify our customer's identity. In this use case, we would have already enrolled the customer's biometrics and biographic data and identity-proofed him or her through the agency's in-person processes. During in-person identity-proofing, voice biometrics would be collected using a sensor under our control. During remote authentication, a sensor on a customer's device (e.g. computer, telephone, mobile phone, tablet) would be used to transmit the voice to a system under our control for central verification. Our system would perform a 1:1 comparison of the claimant's voice against the known customer's voice.
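The 1:1 central verification flow described above can be sketched as follows. Everything here is an assumption for illustration: the function names, the byte-comparison "scorer" (a stand-in for a real speaker-verification model), and the threshold are ours, not USCIS's or NIST's.

```python
# Minimal sketch of 1:1 central voice verification: the server compares a
# remotely captured voice sample against the single enrolled template for
# the claimed identity. The scorer and threshold are illustrative stand-ins.

ACCEPT_THRESHOLD = 0.8  # hypothetical similarity cutoff


def similarity(sample: bytes, template: bytes) -> float:
    """Stand-in for a real speaker-verification scorer (e.g. an
    embedding-distance model); here, the fraction of matching bytes."""
    if not template or not sample:
        return 0.0
    matches = sum(a == b for a, b in zip(sample, template))
    return matches / max(len(sample), len(template))


def verify_claimant(sample: bytes, enrolled: dict, claimed_id: str) -> bool:
    """1:1 comparison: look up the claimed identity's enrolled template
    and accept only if the similarity score clears the threshold."""
    template = enrolled.get(claimed_id)
    if template is None:
        return False  # no voice on file for this identity
    return similarity(sample, template) >= ACCEPT_THRESHOLD


enrolled = {"customer-123": b"voiceprint-bytes"}
print(verify_claimant(b"voiceprint-bytes", enrolled, "customer-123"))  # True
print(verify_claimant(b"different-audio!", enrolled, "customer-123"))  # False
```

Note that nothing in this flow depends on which device captured the sample, which is the point of Comment 2: for central 1:1 matching, the assurance comes from the match score, not from authenticating the sensor.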

We also might explore other uses of voice biometrics, such as the reverse use case where we remotely record a claimant's voice, authenticated with no identity proofing, and later verify that the remote voice matches the customer's in-person voice when we identity-proof them and collect in-person biometrics.

rsfaron23 commented 8 years ago

I wonder whether this latter use case would work in a federated identity authentication process for persons who have also been proofed during other citizen engagements with trusted identity providers or federation RPs. It could be interesting to apply the remote-capture or in-person-capture voice biometrics to multiple USG customer use cases. It might be hard if there is not an authenticated IdP for a USG customer before they have gone through the identity proofing process.

paul-grassi commented 8 years ago

We will not allow biometrics to be a stand-alone single factor. In addition, an unverified/unauthenticated sensor will not be allowed for central biometric matching.