This is a should, not a must. Servers that read a published configuration and use the list of keys to check the signature on a token they've received will have to set limits on how many keys they're willing to read and check, so keeping that list small maximizes the chances that your peer won't stop checking keys before it gets to a valid one. Some of this could be addressed by having a key ID hint with a token, but I'm still not sure that publishing dozens of valid keys at a time is a good strategy. Directly published configuration allows easy key rotation, but one would hope that in most cases a cached copy of the information can be used, so the latency of a server-side fetch can be avoided for user-level operations.
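To make the failure mode concrete, a relying server's check tends to look roughly like this (the names and the verify() callable are hypothetical, not anything defined by the specification):

```python
# Rough sketch of the verification loop; names and the verify()
# callable are hypothetical, not part of the specification.
MAX_KEYS_CHECKED = 3  # an implementation-chosen cap, not a spec value

def token_is_valid(token, signature, published_keys, verify):
    """Try each published key until one verifies or the cap is reached."""
    for key in published_keys[:MAX_KEYS_CHECKED]:
        if verify(key, token, signature):
            return True
    # If the issuer publishes dozens of keys, the valid one may sit past
    # the cap, and verification fails even though the token is good.
    return False
```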
Hi Brad, thank you for your response.
While I appreciate that this specific limit is only a recommendation, its presence may well dictate the limit in the majority of implementations, so in practice it may be treated as a must.
I think using a key ID hint is a nice solution, and it is how OpenID Connect gets around this problem. (Speaking of which, why wasn't JWT, which supports nested encryption and signing, used for this? I fully appreciate the reasons for wanting to avoid OAuth, but using JWT would allow existing libraries and standards to be leveraged.)
Regarding caching, one approach would be to cache valid public keys and perform a direct lookup on a cache miss. Negative lookups can also be cached for a short period to prevent abuse. This would improve the user experience in most cases while still handling key rotations.
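Roughly what I'm picturing, as a sketch only (the key ID hint and the fetch_published_keys helper are hypothetical, not spec-defined):

```python
import time

# Illustrative key cache, keyed by a key ID hint carried with the token.
# fetch_published_keys() stands in for a direct fetch of the issuer's
# published configuration; it is a hypothetical helper, not a spec call.
_key_cache = {}        # kid -> public key
_negative_cache = {}   # kid -> time of the last confirmed miss
NEGATIVE_TTL = 300     # cache misses briefly to prevent fetch abuse

def resolve_key(kid, fetch_published_keys):
    if kid in _key_cache:
        return _key_cache[kid]
    if time.time() - _negative_cache.get(kid, 0) < NEGATIVE_TTL:
        return None  # recently confirmed missing; skip the remote fetch
    _key_cache.update(fetch_published_keys())  # returns {kid: key}
    if kid in _key_cache:
        return _key_cache[kid]
    _negative_cache[kid] = time.time()
    return None
```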
That caching behavior is pretty much how Facebook's implementation works. I plan to release a more comprehensive set of engineering and security documentation, but such considerations are not on the interoperability surface, so the details are not discussed in the core specification.
Are you aware of instances in practice where OpenID providers are using large numbers of keys in this way, or is this just a theoretical deployment concern?
Absolute simplicity and minimizing the number of external dependencies necessary to implement was a very important design principle throughout. Re: JWT, it was actually the implementation choice for an early prototype, but the complexity cost of handling all of the edge cases a general-purpose library introduced turned out to be much greater than just writing a single-pass parser for a fixed-format token. Library reuse isn't a fundamental good on its own; it is only a strategy to avoid complexity.
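For what it's worth, by a single-pass parser for a fixed-format token I mean something in the spirit of the sketch below; the field layout shown is invented purely for illustration and is not the token format defined in the draft:

```python
import struct

def parse_token(data: bytes) -> dict:
    """Single-pass parse of a fixed-layout binary token.

    The layout here (1-byte version, 1-byte type, 2-byte big-endian
    payload length, then the payload and a trailing signature) is
    invented for illustration; see the draft for the real token format.
    """
    version, token_type, payload_len = struct.unpack_from(">BBH", data, 0)
    payload = data[4:4 + payload_len]
    signature = data[4 + payload_len:]
    return {"version": version, "type": token_type,
            "payload": payload, "signature": signature}
```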
I also deliberately didn't want a format like JWT that makes "accidental" extension possible, since any extra fields might change the properties of the protocol in undesirable ways. You can't strip them without breaking the signature, and you can't save them without perhaps needing to change your privacy policy. It also would require you to consider in your implementations what it might mean if JWTs from other contexts were replayed to delegated recovery endpoints.
The protocol was initially released with just one partner so that the possibility of introducing a new version for the token remains open, but the bar to convincing me that any new fields or formats will pay for their complexity cost in real world deployments is pretty high.
I think the implementation specifics will be clearer once the example implementations are released, and I agree that the details are outside the scope of the core specification documentation.
I'm not aware of any OpenID providers with large numbers of keys, and even Google currently has only four listed: https://www.googleapis.com/oauth2/v3/certs
The key consideration is purely theoretical; the idea is to avoid distributing the private keys and therefore simplify key management. It was just a solution that sprang to mind as I read the spec, so I'm certainly not wedded to it.
I think some of the advantages of using a pre-existing format where possible are that the pre-existing libraries are already tested and mature, and the encryption and signing algorithms are mature and well defined. I can't argue with your points against JWTs, especially since you've already tried them in a prototype, but I'm not convinced that avoiding complexity is the only benefit of using pre-existing formats.
Closing for now, we can re-open if it comes up again. Thank you for the review and feedback.
The specification recommends publishing at most two public keys at the configuration endpoint, solely to support key rotation: "The Recovery Provider should not publish more than two keys; enabling key rotation with a small overlap period is the primary purpose of allowing more than one key to be published." (https://github.com/facebookincubator/DelegatedRecovery/blob/master/draft-hill-delegated-recovery.raw.txt#L631-L640)
I would propose that this limit be removed, for the sake of high availability.
Consider a scenario where you have a single recovery provider server. At any given time, the server might keep track of a single active key pair and rotate this periodically. After rotation, the old public key should also be kept and published for a short period of time to cover any in-flight recoveries.
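As a sketch of the single-server rotation I mean (hypothetical helper names, not anything from the specification):

```python
import time

# Illustrative single-server rotation: one active key pair, with the
# previous public key kept published for an overlap window so that
# in-flight recoveries still verify. generate_keypair() is a
# hypothetical helper, not something defined by the specification.
OVERLAP_SECONDS = 24 * 3600

class RotatingKeys:
    def __init__(self, generate_keypair):
        self._generate = generate_keypair
        self.active_private, self.active_public = generate_keypair()
        self.previous_public = None
        self.rotated_at = time.time()

    def rotate(self):
        self.previous_public = self.active_public
        self.active_private, self.active_public = self._generate()
        self.rotated_at = time.time()

    def published_keys(self):
        keys = [self.active_public]
        if self.previous_public and time.time() - self.rotated_at < OVERLAP_SECONDS:
            keys.append(self.previous_public)  # at most two keys published
        return keys
```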
Now consider a scenario where you have two (or more) load-balanced recovery provider servers (without session affinity) with a shared database of some sort. With the above limit, the servers would need to share the active key pair and coordinate key rotations.
For security purposes, it might be preferable for each server to have its own private key to prevent the keys from needing to be distributed between the servers. This also simplifies key rotation as each server would be responsible for its own key pair. With this strategy, only the public keys need to be distributed between the servers for the configuration endpoint. This would require up to two keys per server to be published on the configuration endpoint at any given time.
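Concretely, the configuration endpoint would then just aggregate each server's published keys, something like this (building on the rotation sketch above; the field name is invented for illustration and is not the configuration key defined in the draft):

```python
# Purely illustrative aggregation of per-server public keys for the
# configuration endpoint; "pubkeys" is an invented field name, not the
# configuration key defined in the draft.
def build_configuration(servers):
    keys = []
    for server in servers:
        # Each server contributes its active key plus, during an overlap
        # window, its previous key: up to two keys per server.
        keys.extend(server.published_keys())
    return {"pubkeys": keys}
```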