mastodon / fediverse_auxiliary_service_provider_specifications

Specifications for Fediverse Auxiliary Service Providers

Completely remove OAuth 2.0 #24

Open oneiros opened 2 weeks ago

oneiros commented 2 weeks ago

I had a long and very fruitful discussion with @ThisIsMissEm (thanks!) and she convinced me we should get rid of OAuth 2.0 altogether. A couple of arguments:

Overall, I am no longer convinced this is worth it. At the end of the day we gain very little from using OAuth.

Looking for alternatives, Emelia referred me to AWS's authentication, which is really interesting, but probably not worth copying 1:1 (see below). I also found this article, which helped a lot: https://www.latacora.com/blog/2018/06/12/a-childs-garden/

So, what are the alternatives?

1. Just use an API key

This is super simple: just send the secret key we already generate in an HTTP header with every request. In theory this is a little less secure than the OAuth approach, but we already rely on TLS, which should be sufficient to protect the key over the wire.
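To make this concrete, here is a minimal sketch of what a request could look like under this option. The endpoint, header name, and key format are assumptions for illustration only; the actual scheme is whatever the draft in #25 defines.

```python
import urllib.request

# Hypothetical values -- the real endpoint and key would come from
# provider registration.
FASP_BASE_URL = "https://fasp.example"
SECRET_KEY = "secret-from-registration"

# Send the shared secret with every request; TLS is the only thing
# protecting it in transit.
request = urllib.request.Request(
    f"{FASP_BASE_URL}/some/endpoint",
    headers={"Authorization": f"Bearer {SECRET_KEY}"},
)
response = urllib.request.urlopen(request)
```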

Pros:

Cons:

See #25 for a draft.

2. Custom Authorization header with an HMAC authenticating a timestamp

Here we would not send the secret key with every request, just an HMAC authenticating a timestamp. Both parties could validate that the timestamp was signed with the secret key, so this works for authentication. The timestamp itself can be used to invalidate the token after a short while to prevent replay attacks.

There is no standardized way to transport this exact information, so we could define our own scheme for the HTTP Authorization header, designed to be extensible. This could look as follows:

```
Authorization: FASP-HMAC-SHA256 id=b2ks6vm8p23w, created=1728467285, signature=e2821f5113f2dbb7a331e2f7b0198a0fd35c419ea1dab65403e63443b3d61685
```

This is in part quite similar to what AWS does, though they sign many more parts of the request.
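As a rough illustration, signing and verifying under such a scheme could look like the sketch below. Whether anything besides the timestamp is covered by the HMAC, and how large the freshness window is, are assumptions here; the draft in #26 is authoritative.

```python
import hashlib
import hmac
import time

# Hypothetical identifiers for illustration.
KEY_ID = "b2ks6vm8p23w"
SECRET_KEY = b"secret-from-registration"

def build_authorization_header() -> str:
    """Client side: authenticate the current timestamp with the shared secret."""
    created = str(int(time.time()))
    signature = hmac.new(SECRET_KEY, created.encode(), hashlib.sha256).hexdigest()
    return f"FASP-HMAC-SHA256 id={KEY_ID}, created={created}, signature={signature}"

def verify(created: str, signature: str, max_age_seconds: int = 300) -> bool:
    """Server side: recompute the HMAC and reject stale timestamps to prevent replay."""
    expected = hmac.new(SECRET_KEY, created.encode(), hashlib.sha256).hexdigest()
    is_fresh = abs(time.time() - int(created)) <= max_age_seconds
    return is_fresh and hmac.compare_digest(expected, signature)
```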

Pros:

Cons:

See #26 for a draft.

3. HTTP Message Signatures

Ideas expressed in AWS's authentication scheme and elsewhere have found their way into RFC-9421, HTTP Message Signatures.

Sadly, this is a complex standard that tries to support many very different use cases. As such, it only defines building blocks, and actual applications need to specify which ones they want to use. But if I understand it correctly, a minimal implementation of it would be very similar to option 2. One could then build on that to get even more security, i.e. to make requests more or less tamper-proof.
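To give an impression of what such a minimal profile could look like, here is a sketch that signs only the request method and target URI using the RFC's symmetric hmac-sha256 algorithm. The covered components, key id, and secret are assumptions for illustration; the draft in #27 specifies the actual profile.

```python
import base64
import hashlib
import hmac
import time

# Hypothetical shared secret and key id.
KEY_ID = "b2ks6vm8p23w"
SECRET_KEY = b"secret-from-registration"

method = "POST"
target_uri = "https://fasp.example/some/endpoint"
created = int(time.time())

# RFC-9421 signature base: one line per covered component, with the
# signature parameters themselves as the final line.
params = f'("@method" "@target-uri");created={created};keyid="{KEY_ID}";alg="hmac-sha256"'
signature_base = (
    f'"@method": {method}\n'
    f'"@target-uri": {target_uri}\n'
    f'"@signature-params": {params}'
)
tag = hmac.new(SECRET_KEY, signature_base.encode(), hashlib.sha256).digest()

# The two headers RFC-9421 adds to the request.
headers = {
    "Signature-Input": f"sig1={params}",
    "Signature": f"sig1=:{base64.b64encode(tag).decode()}:",
}
```

Verification mirrors this: the receiver rebuilds the signature base from the request and the Signature-Input header, then recomputes and compares the HMAC.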

Pros:

Cons:

See #27 for a draft that also includes a little extra with regard to message integrity.

oneiros commented 2 weeks ago

Updated description with links to draft PRs documenting each of the three options.

ThisIsMissEm commented 2 weeks ago

I think the way forward is HTTP Message Signatures, given that, iirc, that's what we eventually need for authorized fetch (we're just stuck on an old version of the spec). I know some upgrade paths have been documented on SocialHub and in FEPs, so the consistency with other parts of the ecosystem is nice.

I would not do the simple option (too insecure), and I wouldn't do custom if at all possible. I'd use a profile of an existing standard.

erincandescent commented 2 weeks ago

If you want something more secure than bearer tokens, your options are really either OAuth DPoP or HTTP Message Signatures. Both are quite complex specs.

I agree with @ThisIsMissEm that if (standardised) HTTP sigs are the future of S2S comms, then that would probably be the best basis.

p.s. ideally we'd move off of slow and huge RSA keys here.

oneiros commented 2 weeks ago

> If you want something more secure than bearer tokens

But do we? I wonder if a bearer token is not good enough for our purposes today. Also, I would really like to think that relying on HTTPS is sufficient for what we are doing (it seems to be sufficient for most OAuth implementations).

My main concern here is that ideally we want a diverse ecosystem of different FASPs used by many different fediverse servers, all implemented in different languages and frameworks.

The more complex solutions all raise the barrier to entry and might harm adoption. They also make full interoperability harder. And last but not least: complex implementations raise the risk of errors, resulting in less security instead of more.

Saying something like this always carries the risk of sounding handwavy, and I really do not want to come across as dismissing legitimate concerns.

I just think we need to strike a balance here, and I have a hard time weighing the different options because I am not a security expert.

So I opened this as a place for discussion and I am very grateful for any and all feedback. (Thank you very much for mentioning DPoP btw, will read up on that next).

> if (standardised) HTTP sigs are the future of S2S comms then that would probably be the best basis

While it is true that we, Mastodon, will have to implement RFC-9421 at some point and that others, who are currently blocked by us, will follow suit, I am not convinced that this is really the best thing to do here.

My biggest problem is that the standard is kind of complex and I was not able to find good existing library support for it in some major languages.

That means we would force most or all implementers to roll their own implementation of that standard, which might not be ideal for the reasons stated above :man_shrugging:

> p.s. ideally we'd move off of slow and huge RSA keys here.

Agreed. Also, please note that so far I have shied away from proposing any kind of asymmetric crypto here. I have done a lot of reading these past days, and more than once I came across the advice that one should avoid asymmetric crypto if possible, due to its various footguns.

ThisIsMissEm commented 2 weeks ago

@oneiros could you list the footguns? afaik, asymmetric encryption is almost always better than symmetric encryption. There can be cases where certain fixed payloads decrease the safety of the encryption, but as we're including a time component, that's unlikely to ever be the case here. And asymmetric keys would actually provide extra safety, because you'd not be communicating the signing key over the wire.

ThisIsMissEm commented 2 weeks ago

@oneiros also, yes, whilst OAuth does exchange codes and access tokens over HTTPS as its primary security mechanism, there are certain other mitigations in place, e.g. PKCE and issuer identification for authorization codes. That said, OAuth tokens must (should) expire, and using short-lived access tokens and refresh tokens with token families mitigates the risk of interception.

In a case like FASPs, you have two parties that want to send messages to each other. Sure, you could use a shared secret, but then if either party is compromised, that secret is completely burned for both parties. Using public/private keys, only the compromised server's keypair needs to be regenerated, and integrity is retained for the other party, because I can't fake a message to you simply by having captured a message or received a data dump.

oneiros commented 1 week ago

> could you list the footguns?

No. I was just reiterating general advice I read in several places (the article linked above being one of them). My admittedly shaky understanding is that asymmetric crypto is harder to implement correctly and thus more error-prone.

That being said, if everyone is more comfortable with asymmetric crypto, I will not stand in the way.

> My biggest problem is that the standard is kind of complex and I was not able to find good existing library support for it in some major languages.

I kind of changed my mind here a bit: it might very well be that we, Mastodon, will prioritize implementing RFC-9421, and as I mentioned above, this might mean others will follow suit. That could well lead to more interoperable implementations, and certainly to more experience with this RFC on the side of the fediverse software implementations. The burden would remain for implementers of FASPs, but we hope to be able to address this via our reference provider.

With that in mind I made an update to #27. Please let me know what you think.