eclipse-arrowhead / roadmap

Eclipse Public License 2.0

Token access policy standardization #13

Open ajoino opened 3 years ago

ajoino commented 3 years ago

The token access policy currently leaves the implementation up to the individual client library developers, which leads to interoperability issues when applications developed with different client libraries are used. We need to standardize how the token access policy works to enable interoperability.

Here are my ideas:

HTTP

Request

The token should be sent in the Authorization header field, something like Authorization: Bearer <TOKEN>, or maybe a custom scheme like Authorization: ARTKN <TOKEN>.
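As a minimal sketch of this suggestion: the consumer attaches the token to the outgoing request in the Authorization header. The "ARTKN" scheme name and the service URL below are assumptions taken from the discussion, not a settled standard.

```python
import urllib.request


def build_request(url: str, token: str) -> urllib.request.Request:
    """Attach an Arrowhead token to a request via the Authorization header.

    The "ARTKN" scheme is a hypothetical placeholder until a scheme name
    is standardized.
    """
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"ARTKN {token}")
    return req
```

A consumer library would call `build_request` for every consumption request, so the application developer never has to construct the header by hand.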

Response

If token authorization fails on the provider side, the response should have status code 403 Forbidden (see issue#215), with some standardized text, like Forbidden: Token authorization failed: <case>, where the case might be invalid token or no longer valid, to clarify to the consumer what it must do to consume the service.
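A sketch of what the standardized failure response could look like on the provider side. The status code, reason phrase, and the two failure cases are assumptions drawn from the proposal above, not an agreed format.

```python
# Hypothetical failure cases from the proposal; a standard would enumerate these.
FAILURE_CASES = ("invalid token", "no longer valid")


def token_failure_response(case: str) -> tuple:
    """Build a (status, body) pair for a failed token authorization."""
    if case not in FAILURE_CASES:
        raise ValueError(f"unknown failure case: {case!r}")
    return 403, f"Forbidden: Token authorization failed: {case}"
```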

CoAP

The tokens become very long, and CoAP is designed for short messages, so I think we should discourage use of the token access policy with CoAP.

MQTT, OPC-UA, etc.

I do not know enough about these protocols to suggest any solutions.

tsvetlin commented 3 years ago

Further discussion is required on how to recommend using the token between clients.

jerkerdelsing commented 3 years ago

@tsvetlin I suggest setting up a meeting between Szvetlin, Balint, Jacob, Jens, Aparajita and Emanuel.

emanuelpalm commented 3 years ago

We should likely support multiple kinds of tokens for different use cases.

The current token is a signed self-contained token loosely based on JWT. It contains an expiration timestamp, the name of the service it grants the consumption of, the names of the consumer and provider systems, and is signed by the Authorization system, if I remember correctly. Validating it requires knowledge of the Authorization system's public key. Because of the amount of information it contains and its Base64 encoding, the tokens tend to be quite large (well over 1 kB, if my memory is correct).
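To illustrate the claim-checking half of validating such a self-contained token, here is a simplified sketch. It assumes the signature has already been verified against the Authorization system's public key; the field names are illustrative, not the actual Arrowhead token format.

```python
import time


def claims_valid(claims: dict, service: str, consumer: str, provider: str) -> bool:
    """Check the decoded claims of a self-contained token.

    Assumes the token's signature was already verified elsewhere.
    Field names ("service", "consumer", ...) are hypothetical.
    """
    return (
        claims.get("service") == service
        and claims.get("consumer") == consumer
        and claims.get("provider") == provider
        and claims.get("expires-at", 0) > time.time()
    )
```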

Secure Random Number Tokens

An alternative could be a secure random number token. The token itself contains no information. A 128-bit number is likely to provide enough security, especially if the tokens expire within a couple of minutes, but 256-bit numbers could be used to satisfy the pedantic*.

Validating such a token is a matter of requesting the authorization rule associated with it from some system (presumably the Authorization system). The rule object itself, which the token user (the consumer) never sees, has an expiry date and can, as a consequence, be cached by the consumed system (which retrieved it) for a couple of seconds (unless immediate token revocation is desired). If a provided token has no associated rule, it is invalid and access is not granted. If the rule does not name the service being consumed, it is likewise invalid and access is not granted.

The secure random number token scheme is quite easy to make performant in practice, especially if reliable network transports exist between the devices that host the systems using them. It is simply a matter of keeping a key/value store mapping random numbers to capability objects, such as {"service":"service-discovery","provider":"service-registry","consumer":"my-consumer","expires-at":1620812787} ("expires-at" is the expiration time as a Unix timestamp). It may even be unnecessary to save such a key/value store to disk, which would improve performance significantly.

To make a local cloud more resilient to the token registry going offline, a scheme could be used where each new token is preemptively pushed to the system that should accept requests matching it. If a token is revoked, the system in question is immediately notified.

* Note that a pure random number from a high quality source is likely to have higher entropy than an equally sized hash value. I have, for example, seen claims that SHA-256 only provides 128 effective bits of security. See here for an explanation of why hash functions reduce entropy.

jerkerdelsing commented 3 years ago

Allowing multiple kinds of tokens is in line with the Arrowhead interoperability strategy.

As an example of this, Victor Kebande and Shailesh are looking into an approach where TRNGs can be used within a local cloud to enable updates of local cloud certificates on a much shorter time base than today.

emanuelpalm commented 2 years ago

@ajoino @jerkerdelsing How do we proceed with this? I suppose we are aware of the issue and will keep it in mind as we continue refining Arrowhead. Would some other, more direct, action also be desirable?

ajoino commented 2 years ago

There are two things being discussed in this issue: 1) standardizing where to find the tokens (the reason I opened this issue), and 2) deciding which tokens should be Arrowhead compliant.

I personally think that limiting which tokens can be used is both doable and desirable; since the tokens are generated by the authorization system, we are in full control of which tokens are considered Arrowhead compliant, and limiting the kinds of compliant tokens will reduce the burden on developers. I also think we should define how the token should be sent between consumers and producers for different protocols, and in my original post I have a suggestion for how that can be achieved in HTTP. Perhaps the first word in the Authorization header can be used to designate what kind of token is being used? Lastly, to me the token security details should be completely abstracted away from a developer of an Arrowhead client and its configuration; Arrowhead systems should not be designed to work differently depending on the access policy, as that seems like bad design to me.
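The suggestion to let the first word of the Authorization header designate the token kind could be sketched like this. The scheme names are assumptions; only the parsing pattern is the point.

```python
def parse_authorization(header: str) -> tuple:
    """Split an Authorization header into (scheme, token).

    The scheme word (e.g. a hypothetical "ARTKN" or "Bearer") would tell
    the provider which validation routine to dispatch to.
    """
    scheme, _, token = header.partition(" ")
    if not scheme or not token:
        raise ValueError("malformed Authorization header")
    return scheme, token
```

A provider-side library could then keep a dispatch table mapping scheme names to validators, so adding a new compliant token kind never changes application code.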

As to how we go forward, we need to write a document defining how the different access policies work and the technical details necessary for anyone developing their own client libraries to implement all of them. We must then declare any system that doesn't at least follow those guidelines to be Arrowhead non-compliant. I do see how different organizations might wish to implement their own funky access policies, and we shouldn't hinder them, but we must at least define the basic INSECURE/CERTIFICATE/TOKEN access policies, and in my opinion they should be fairly restrictive in what they allow.

And currently I'm a bit swamped with thesis work, but I'll have more time come December, so maybe we should schedule a half-day to discuss this further @emanuelpalm @jerkerdelsing @tsvetlin ?

emanuelpalm commented 2 years ago

@ajoino I think the ambition is for it to be possible to use any kind of token within Arrowhead (by which I mean that it must be possible to express whatever type of token is used in service registry entries and other places), even though our Authorization System may only support a couple. I would love to be in such a discussion. Half a day may be a stretch, but I could most likely show up for a 1.5-hour meeting.

ajoino commented 2 years ago

@emanuelpalm Half a day is indeed a stretch :)

It seems we agree on the general idea, how do we best go about scheduling a discussion?

emanuelpalm commented 2 years ago

How about you e-mail me, and then we can have a discussion with @jerkerdelsing and co.?

ajoino commented 2 years ago

Ok, I'll email you in a couple of weeks; I'm a little too busy with my thesis right now.

jerkerdelsing commented 2 years ago

@PerOlofsson documentation in Authorisation SysD.