Jarecki et al. have an upcoming paper at PKC 2021 that analyzes the security properties of the blinding mechanisms used in the 2HashDH construction. For the purposes of this issue, recall that there are two types of blinding (using group notation to align with this specification, whereas their paper uses exponential notation):
Multiplicative:
C -> S: a = rH’(x), for random scalar r
S -> C: b = ak, for server private key k
C: output H(x, br^(-1)) = H(x, kH’(x))
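The multiplicative flow can be sketched in Python over a toy Schnorr-style subgroup, written multiplicatively even though the flows above use additive group notation. All parameters and helper names here (Hprime, H, the group constants) are illustrative, not from the draft, and the group is far too small for real use:

```python
import hashlib
import secrets

# Toy order-q subgroup of Z_p* (insecure; illustration only).
p, q, g = 2039, 1019, 4

def Hprime(x: bytes) -> int:  # stand-in for the hash-to-group map H'(x)
    e = int.from_bytes(hashlib.sha256(x).digest(), "big") % q or 1
    return pow(g, e, p)

def H(x: bytes, elem: int) -> str:  # stand-in for the final hash H(x, .)
    return hashlib.sha256(x + elem.to_bytes(2, "big")).hexdigest()

x = b"client input"
k = secrets.randbelow(q - 1) + 1      # server private key

# C -> S: a = rH'(x) (here: H'(x)^r), for random scalar r
r = secrets.randbelow(q - 1) + 1
a = pow(Hprime(x), r, p)

# S -> C: b = ak
b = pow(a, k, p)

# C: unblind with r^(-1) mod q; b^(r^-1) = H'(x)^k, i.e., kH'(x)
output = H(x, pow(b, pow(r, -1, q), p))
assert output == H(x, pow(Hprime(x), k, p))
```

Note that the client needs only r^(-1) mod q to unblind; the server public key never enters the computation, which is why this variant works in the base mode.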
Additive:
C -> S: a = H’(x) + rG, for random scalar r and fixed generator G
S -> C: (b = ak, z = kG), for server private key k
C: output H(x, b - rz) = H(x, kH’(x))
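The additive flow can be sketched the same way, again over a toy multiplicatively-written subgroup where g plays the role of the fixed generator G (all names and parameters illustrative, not from the draft):

```python
import hashlib
import secrets

# Toy order-q subgroup of Z_p* (insecure; illustration only); g also
# plays the role of the fixed generator G from the flow above.
p, q, g = 2039, 1019, 4

def Hprime(x: bytes) -> int:  # stand-in for the hash-to-group map H'(x)
    e = int.from_bytes(hashlib.sha256(x).digest(), "big") % q or 1
    return pow(g, e, p)

x = b"client input"
k = secrets.randbelow(q - 1) + 1      # server private key

# C -> S: a = H'(x) + rG (here: H'(x) * g^r), for random scalar r
r = secrets.randbelow(q - 1) + 1
a = (Hprime(x) * pow(g, r, p)) % p

# S -> C: b = ak, z = kG
b = pow(a, k, p)
z = pow(g, k, p)

# C: b - rz (here: b * (z^r)^(-1)) cancels the blinding term rkG,
# leaving kH'(x)
n = (b * pow(pow(z, r, p), -1, p)) % p
assert n == pow(Hprime(x), k, p)
```

The unblinding only requires scalar multiplications (no scalar inversion), which is the source of the performance advantage; the cost is that the client must trust z, which is exactly where the attack below comes in.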
A generalized summary of their results, without accounting for particular application properties, is as follows.
Multiplicative blinding is safe.
Additive blinding is possibly unsafe, unless one of the following holds:
The client has a certified copy of the server public key. This always applies in the verifiable mode by necessity. There may be cases where the client has a certified copy of the key in the base mode.
The client input is high entropy, e.g., in the case of Privacy Pass, 32 random bytes of data.
The client mixes the public key z = kG into the OPRF evaluation, e.g., by computing H(x, z, kH’(x)) instead of H(x, kH’(x)).
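The third mitigation amounts to a small change in the final hash. A minimal sketch, where finalize is a hypothetical helper (not a name from the draft) and length-prefixing stands in for whatever unambiguous encoding the draft specifies:

```python
import hashlib

def finalize(x: bytes, z: bytes, unblinded: bytes) -> bytes:
    # Mitigated output: fold the server public key z into the hash,
    # computing H(x, z, kH'(x)) instead of H(x, kH'(x)). A maliciously
    # chosen z then changes the client's output, so the server can no
    # longer equivocate on z without detection.
    h = hashlib.sha256()
    for part in (x, z, unblinded):
        # Length-prefix each field to avoid encoding ambiguity.
        h.update(len(part).to_bytes(2, "big") + part)
    return h.digest()
```

Two honest runs with the same z agree, while any substitution of z yields a different output, which is the property the mitigation relies on.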
The fundamental problem is as follows: additive blinding with a maliciously created z, i.e., one different from kG, gives the attacker a way of testing one input per OPRF interaction. Applications wherein this capability exists regardless of the blinding mechanism, e.g., OPAQUE, are not affected (additive blinding is OK). However, for applications where this attacker capability does not otherwise exist, additive blinding introduces a real weakness.
All in all, this means the choice of blinding mechanism has security implications. The draft should ideally offer a sane default, with options and guidance for applications that know what they’re doing and whose circumstances warrant something different (see similar text in hash-to-curve). In considering these defaults, there are a number of options on the table, accounting for code reuse, performance, and bandwidth.
Here’s what I propose we do to address this issue.
First, refactor the document slightly to permit different types of Blind and Unblind implementations. For example, Blind might be implemented using multiplicative blinding, additive blinding, or both. While doing this, require that additive blinding always include the server public key in the Unblind output so that it’s folded into the Finalize computation. This would promote both types of blinding to the main body of the document; additive blinding is currently specified only in an appendix.
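One way the refactor could look, as a sketch: two interchangeable Blind/Unblind implementations over the toy subgroup used earlier, where the additive variant's Unblind returns the public key alongside the unblinded element so Finalize necessarily mixes it in. Class and method names here are hypothetical, not from the draft:

```python
import hashlib
import secrets

# Toy order-q subgroup of Z_p* (insecure; illustration only).
p, q, g = 2039, 1019, 4

def hash_to_group(x: bytes) -> int:  # stand-in for H'(x)
    e = int.from_bytes(hashlib.sha256(x).digest(), "big") % q or 1
    return pow(g, e, p)

class MultiplicativeBlinding:
    def blind(self, x: bytes):
        r = secrets.randbelow(q - 1) + 1
        return r, pow(hash_to_group(x), r, p)   # (state, blinded element)

    def unblind(self, r: int, b: int, z=None):
        # No public key in the output; Finalize hashes (x, kH'(x)) only.
        return (pow(b, pow(r, -1, q), p),)

class AdditiveBlinding:
    def blind(self, x: bytes):
        r = secrets.randbelow(q - 1) + 1
        return r, (hash_to_group(x) * pow(g, r, p)) % p

    def unblind(self, r: int, b: int, z: int):
        # Fold z = kG into the Unblind output, so Finalize computes
        # H(x, z, kH'(x)) rather than H(x, kH'(x)).
        n = (b * pow(pow(z, r, p), -1, p)) % p  # b - rz in group notation
        return (z, n)

def finalize(x: bytes, parts) -> str:
    h = hashlib.sha256(x)
    for elem in parts:
        h.update(elem.to_bytes(2, "big"))
    return h.hexdigest()
```

A client would then run, e.g., `r, a = AdditiveBlinding().blind(x)`, send a, and compute `finalize(x, AdditiveBlinding().unblind(r, b, z))` on the server's reply; the two variants deliberately produce different Finalize inputs, matching the requirement above.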
Second, map each mode to a particular implementation in the following way:
Verifiable mode uses additive blinding by default, since it is strictly a performance improvement and the client is expected to have a certified copy of the server public key.
Weak verifiable mode (see issue #225) uses both additive and multiplicative blinding.
Base mode uses multiplicative blinding, by default, as it does not require the server public key. Elsewhere, perhaps in an appendix, we can clarify that applications can use additive blinding only if the server public key is available or if the client input is high entropy.