Hey, yes sure.
if I had known about Rauthy at the time
Yes, that is often a problem: there are interesting projects out there that are rather hard to find. Tbh, I am not someone who goes out on social media and such for announcements, so it might be even a bit harder to find Rauthy.
I am moving away from Google, so I will post my answers here instead of adding to the spreadsheet.
I read in your readme what you mean by that. This would be more a client than a server thing, so it's nothing Rauthy does or would implement. It could be done easily on a client's callback site, or with something like an "auth proxy" that just has a list of redirects.
However, depending on the client, you would be logged in directly, because you get a session for Rauthy itself once you have logged in successfully. This means that if I log in to Client A now and want to log in to Client B in 10 minutes, I will see Rauthy's login page for a short moment, but I will be logged in without interaction. This depends on the setup and config though, and on whether the client forces a fresh login with the `prompt` param.
Yes, there is a pretty big one with dedicated API keys and fine-grained access control. It is fully documented with OpenAPI / Swagger as well:
You can create fine-grained API keys with optional key expiry for almost all of the actions, apart from a few that were excluded explicitly, like creating new API keys (to prevent privilege escalation if a key has been leaked somewhere) and editing Upstream Auth Providers:
This is not implemented, but it would be a nice feature. You could already use Rauthy with, for instance, the Traefik ForwardAuth middleware, and it would basically only require Rauthy to add some headers there.
I guess I will just add this feature to the TODO list, since it would be implemented pretty quickly anyway.
No support here, and there never will be. Even though these are still used a lot in big enterprises, I think OIDC is simply better in every way.
What exactly do you mean by that? Like passing each single request through Rauthy, making sure authentication is valid, and forwarding it to the final destination? Because afaik there is really only oauth2 proxy (which you listed as well) that does this. The others support things like the above-mentioned `ForwardAuth`, but don't really proxy the request itself. Please let me know if I am wrong.
Btw, a really cool tool for counting code lines is tokei -> `cargo install tokei`
Edit:
Sorry, I only now saw the extra Forward auth entry in your comparison. I did not have my coffee yet. :)
Also, passwordless email login in the way of "sending a magic link and be logged in without anything else" is not supported by Rauthy.
With #339 you can consider the trusted auth headers to be implemented, just not released yet. :)
just not released yet
... by now it has been released with v0.22.1
I just found rauthy too, and it looks awesome!
My question would be regarding this:
SAML + LDAP: No support here, and there never will be. Even though these are still used a lot in big enterprises, I think OIDC is simply better in every way.
Does this refer to LDAP for authentication? OIDC definitely is much better, but LDAP is more than just authentication. LDAP is a directory of users, groups, and, in our case, quite a bit more data.
Authentication is only one part; others are authorization and on- and offboarding. With rauthy managing users and groups, how do these end up in the applications? Are there plans to implement SCIM? How would users be removed/disabled in applications when removed in rauthy?
Would adding LDAP as a backend for which users and groups exist (not passwords etc.) be an option? Other applications would sync with LDAP, but authentication would happen with OIDC and rauthy, including MFA, passkeys, etc.
I just found rauthy too, and it looks awesome!
Thanks! ;)
LDAP is a directory of users, groups, and - in our case - quite a bit more data.
Rauthy can handle all of that too.
You manage everything, all of your mappings, roles, groups, whatever, from inside the Admin UI (or via API key, if you like).
Rauthy can map:
For instance, I can create roles and groups (in addition to the OIDC default ones) and then map them to a user like this:
You can then create fully customized values for each user as well, for instance:
These can then be set as a key / value entry for each single user:
Then you can create custom scopes, so that only the clients you allow will actually get access to all of these values and mappings. You can decide which of these custom attributes you want to link / map to a scope, if any at all (maybe your downstream app only cares about the scope itself being present):
... and finally map / allow scopes and mappings for each client independently
How would users be removed/disabled in applications when removed in rauthy?
Depends on your downstream app, but usually fully automatically. On login, a user gets redirected to Rauthy's login screen, and if the user does not exist anymore or has been disabled, you simply will not get a token for them. Additionally, when your downstream app checks the tokens after login for updates on roles / groups (like it always should), all mappings will be forwarded and distributed right away.
All information about the user will be included in the signed `id_token` after a successful login, so you don't need any other external syncs and updates at all. You only care about the settings on Rauthy's side and that's it.
Are there plans to implement SCIM?
No, you simply don't need it with OIDC, because all of the information an application needs (and is actually allowed to see) will be included in the `id_token` upon a successful login.
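To make that concrete, here is a minimal sketch of what the downstream side can look like: validate the `id_token` and read the mappings from its claims. This is not Rauthy-specific code; the `roles` / `groups` claim names and the RS256 algorithm are assumptions for illustration and depend on your scope/attribute setup.

```rust
// Sketch only: reading role/group mappings from a validated id_token.
// `roles` / `groups` are assumed custom claims, not a fixed Rauthy contract.
use jsonwebtoken::{decode, Algorithm, DecodingKey, Validation};
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct IdClaims {
    sub: String,
    email: Option<String>,
    #[serde(default)]
    roles: Vec<String>,
    #[serde(default)]
    groups: Vec<String>,
}

fn id_claims(token: &str, issuer_pub_pem: &[u8], client_id: &str) -> anyhow::Result<IdClaims> {
    // The signing algorithm depends on your key setup.
    let mut validation = Validation::new(Algorithm::RS256);
    validation.set_audience(&[client_id]);
    let data = decode::<IdClaims>(token, &DecodingKey::from_rsa_pem(issuer_pub_pem)?, &validation)?;
    Ok(data.claims)
}
```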
Would adding LDAP as a backend for what users and groups exist (not passwords etc.) an option?
I never thought about this; I have never had a use case or question regarding it so far.
I just don't see a reason why someone would want that if you can do all the mapping and setup with OIDC directly, without external dependencies, but I am no LDAP pro and I have never really used it at a bigger scale.
Would adding LDAP as a backend for what users and groups exist be an option?
If LDAP is a hard requirement (e.g. in order to accommodate a legacy system) you’re probably better off with Kanidm by @Firstyear as it has an LDAP interface to support that exact use case.
Thanks for the attribute mapping guide. I tried that before and the plus button for the scopes didn't appear, but now it did work.
That isn't exactly the kind of data I meant, though. My LDAP contains user accounts, groups, (and other records) that, for example, are used by our self-hosted mail servers too (postfix/dovecot). User records have many email addresses besides a primary address; groups can have addresses and work like email aliases. It is a shared hierarchical database, a directory service accessed by different tools, not just a password store. Groups can be hierarchical, include other groups, and be mapped into a flat memberOf list on users using overlays.
You could even store UNIX users in LDAP, with UID/GID, home directory, and storage quotas, so that they can log in on all machines. Usually the actual login would be through Kerberos. This is still common in many universities and high-performance computing labs.
So, LDAP stores a lot more than users and passwords. To keep all that data in one place, it would be nice to have LDAP as a backend, not for authentication, but for users, groups, and attributes. It would be wonderful to map LDAP attributes to user and IAM attributes, e.g. to use them in OIDC profiles (this could be a text config file). Still being able to sync password changes back to LDAP for legacy applications would be a plus.
If LDAP is a hard requirement (e.g. in order to accommodate a legacy system) you’re probably better off with Kanidm by @Firstyear as it has an LDAP interface to support that exact use case.
This is an LDAP interface for authentication, based on the users and groups stored in Kanidm. That is the other way around from what I asked, and it can be covered with OIDC.
The question still is: how do you expect user on- and offboarding with rauthy to happen? Deleting a user in rauthy will not remove them from any application where they have signed in. If you add a user to a group, how will that be synced to an application using that group?
All information about the user will be included in the signed `id_token` after a successful login, so you don't need any other external syncs and updates at all. You only care about the settings on Rauthy's side and that's it.
That is the primary problem with SSO systems. Until a user has signed in to a system, they will not be present there. You wouldn't be able to assign them a ticket before they have signed in to the ticket system.
Worse, they can never be offboarded from a system through a login. Accounts in these applications would stay active forever. If you run a GitLab with SSO, even after removing the user from rauthy, their GitLab account would stay active, and they could access repos through SSH keys. If you have any tool with per-seat payment, you would have to keep paying for "removed" users. Any application that sends notification emails (like comments on a ticket) would continue sending them to "removed" users.
If you change groups for a user, these won't change in any application until the user logs in again. Sometimes that can be alleviated with short session timeouts in applications, but often it can't. Removing a user from groups (restricting access) won't actually restrict access until the user logs in again. If they are still signed in to the application, they can still access all resources.
You need either LDAP or SCIM to actively propagate changes. With LDAP, applications regularly sync users and group mappings. This is the more legacy approach, and it can be tricky to avoid syncing users into applications they actually have no access to. As far as I know, SCIM was invented by OIDC IAM providers, such as Okta, to push events to downstream applications. The moment you remove a user from a group, the IAM sends HTTP requests to each downstream application so that they can create, update, or remove the user. That is independent of when the user logs in again, and it works for user creation and deletion as well. For example, GitLab would deactivate the user, block SSH keys, and stop sending notification emails.
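For readers unfamiliar with SCIM, the push model described here looks roughly like this (RFC 7644), seen from the IdP's side; the endpoint base URL and bearer token are placeholders, and real providers add retries and error handling.

```rust
// Sketch of the SCIM push model (RFC 7644): the IdP deactivates a user in a
// downstream app the moment they are removed. URL and token are placeholders.
use serde_json::json;

async fn scim_deactivate(
    client: &reqwest::Client,
    app_base_url: &str, // the downstream app's SCIM endpoint base (assumed)
    user_id: &str,
    bearer_token: &str,
) -> Result<(), reqwest::Error> {
    let body = json!({
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
        "Operations": [{ "op": "replace", "path": "active", "value": false }]
    });
    client
        .patch(format!("{app_base_url}/scim/v2/Users/{user_id}"))
        .bearer_auth(bearer_token)
        .json(&body)
        .send()
        .await?
        .error_for_status()?;
    Ok(())
}
```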
Please excuse the long message, but rauthy would be a wonderful solution for all my home, club, and some business use cases if it could do that. Other solutions I've found either can't do user lifecycle or read data from LDAP, or are huge JVM applications with limited OIDC support and no WebAuthn/passkeys.
Okay, I see what your problem is.
There are ways to solve a few of these problems with just OIDC (offline tokens, for instance, to sync user data), but the downstream application has to support this.
I have never used SCIM so far, so I have no idea how much work it would be to implement, but I don't think Rauthy will support LDAP sync unless someone else provides all the PRs for it. I simply don't know enough about LDAP, and I have no use case for it, so for me all these many hours of work (probably?) would not provide any benefit.
I thought about providing an LDAP interface in the very beginning, only to find out that there simply were no crates that supported it (back then; I haven't checked in a while), so I decided to leave it out, because it would be too much work and I personally don't use / need it at all.
SCIM though sounds reasonable, but I would need to check it and dig deeper into the topic to give a definite answer. It would be quite a bit of additional work, while I don't even use / need it right now (maybe in the future).
Thanks for the attribute mapping guide. I tried that before and the plus button for the scopes didn't appear, but now it did work.
This only appears for custom scopes and when custom attributes exist, so you can't mess up the OIDC default scopes by accident.
The question still is: how do you expect user on- and offboarding with rauthy to happen? Deleting a user in rauthy will not remove them from any application where they have signed in.
Correct, they would simply not get removed automatically.
If you add a user to a group, how will that be synced to an application using that group?
This, though, will happen automatically with each new token.
... and they could access repos through SSH keys
Which is why I use SSH certificates provided by another application, but yes, correct.
Edit:
Auto-removing users can be done via an offline token. These are valid for long periods of time and will return data from the `/userinfo` endpoint even when the user has been logged out on the IdP. I need to "reactivate" them in Rauthy again; they were deactivated in an earlier migration, and I simply need to check the offline scope and issue long-lived tokens. This is a small thing to bring back though.
Apart from this offline syncing, I don't see any real benefit in offline tokens, which is why I did not care about them too much. Short-lived tokens and sessions are more secure anyway, and logging in via passkeys is done in seconds.
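As a sketch of that offline-sync idea: a downstream app could hold a long-lived offline token and poll the standard OIDC `/userinfo` endpoint in the background. The URL here is illustrative; the real one comes from the issuer's discovery document.

```rust
// Background sync via a long-lived offline token: poll the standard OIDC
// /userinfo endpoint; a non-2xx answer means the user was disabled or deleted.
use serde_json::Value;

async fn poll_userinfo(
    client: &reqwest::Client,
    userinfo_url: &str, // take this from the issuer's discovery document
    offline_token: &str,
) -> Result<Option<Value>, reqwest::Error> {
    let resp = client.get(userinfo_url).bearer_auth(offline_token).send().await?;
    if resp.status().is_success() {
        Ok(Some(resp.json::<Value>().await?)) // user still exists; claims may have changed
    } else {
        Ok(None) // treat as offboarded upstream
    }
}
```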
Since this is off-topic and a bit more involved, we should continue in the discussions and not in this issue though. :)
Thanks @sebadob; very helpful!
I am moving away from Google, so I will post my answers here instead of adding to the spreadsheet.
Fully support that
Multi-domain auth
I read in your readme what you mean by that. This would be more a client than a server thing, so it's nothing Rauthy does or would implement. It could be done easily on a client's callback site, or with something like an "auth proxy" that just has a list of redirects. However, depending on the client, you would be logged in directly, because you get a session for Rauthy itself once you have logged in successfully. This means that if I log in to Client A now and want to log in to Client B in 10 minutes, I will see Rauthy's login page for a short moment, but I will be logged in without interaction. This depends on the setup and config though, and on whether the client forces a fresh login with the `prompt` param.
I'm not sure how you would do what I need purely on the client side. Say I have app1.domain1.com and app2.domain2.com. My auth server is running at auth.domain1.com. Once I have a session there, it sets a cookie for *.domain1.com and uses trusted header auth, so any apps on that domain are implicitly authenticated without needing to do any OIDC flow explicitly. Those apps don't even need to implement OIDC, which is the beauty of trusted header auth.
However, app2.domain2.com and any other apps on *.domain2.com would each have to implement OIDC and perform individual flows.
What I mean by multi domain auth is that I would also expose the same auth server instance at auth.domain2.com, and perform a redirect whenever the user authenticates at auth.domain1.com with a special token that would automatically create a session on auth.domain2.com as well. That way you can host apps on as many domains as you want and none of them have to implement OIDC directly. Note that this still isn't implemented in obligator yet.
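For illustration only (this is not implemented in obligator, and all names are made up): since both hostnames are served by the same backend, the hand-off can be a one-time code in shared state that carries the session from auth.domain1.com to auth.domain2.com.

```rust
// Illustrative sketch of the cross-domain session hand-off described above.
use axum::{
    extract::{Query, State},
    http::{header, StatusCode},
    response::{IntoResponse, Redirect},
};
use std::{collections::HashMap, sync::{Arc, Mutex}};

type HandoffStore = Arc<Mutex<HashMap<String, String>>>; // one-time code -> session id

// Called on auth.domain1.com right after a successful login.
async fn finish_login(State(store): State<HandoffStore>) -> Redirect {
    let code = uuid::Uuid::new_v4().to_string();
    store.lock().unwrap().insert(code.clone(), "session-123".into());
    Redirect::temporary(&format!("https://auth.domain2.com/handoff?code={code}"))
}

#[derive(serde::Deserialize)]
struct HandoffParams {
    code: String,
}

// Served on auth.domain2.com by the same backend; redeems the code exactly once,
// so a leaked URL is useless after the first use.
async fn handoff(
    State(store): State<HandoffStore>,
    Query(p): Query<HandoffParams>,
) -> Result<impl IntoResponse, StatusCode> {
    let session = store
        .lock()
        .unwrap()
        .remove(&p.code)
        .ok_or(StatusCode::UNAUTHORIZED)?;
    let cookie = format!("session={session}; Secure; HttpOnly; SameSite=Lax; Path=/");
    Ok(([(header::SET_COOKIE, cookie)], Redirect::temporary("/")))
}
```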
Standalone reverse proxy
What exactly do you mean by that? Like passing each single request through Rauthy, making sure authentication is valid, and forwarding it to the final destination? Because afaik there is really only oauth2 proxy (which you listed as well) that does this. The others support things like the above-mentioned `ForwardAuth`, but don't really proxy the request itself. Please let me know if I am wrong.
Yeah, that's basically the idea, and I'm pretty sure several of the options in the table support it. If not, there's a lot of incorrect information in the table.
Btw, a really cool tool for counting code lines is tokei -> `cargo install tokei`
Thanks, I'll check it out.
Also, passwordless email login in the way of "sending a magic link and be logged in without anything else" is not supported by Rauthy.
Aw, good to know. Can you clarify what is meant by "True passwordless accounts with E-Mail + Magic Link + Passkey" in the feature list? Maybe you can get a magic link but have to associate it with a passkey immediately?
Would adding LDAP as a backend for what users and groups exist be an option?
If LDAP is a hard requirement (e.g. in order to accommodate a legacy system) you’re probably better off with Kanidm by @Firstyear as it has an LDAP interface to support that exact use case.
@erlend-sh since you mentioned Kanidm, any chance you could take a look at its new entry in the spreadsheet and let me know if I have anything wrong?
@sebadob one other question: are you interested in more attention for Rauthy? I think if it was more widely known it would likely become quite popular. Personally I've learned I don't always want more attention for my open source projects haha, so figured I'd ask before I start recommending it to people.
What I mean by multi domain auth is that I would also expose the same auth server instance at auth.domain2.com, and perform a redirect whenever the user authenticates at auth.domain1.com with a special token that would automatically create a session on auth.domain2.com as well. That way you can host apps on as many domains as you want and none of them have to implement OIDC directly.
Okay, I get the idea. What is unclear to me though is how you would perform the redirect between auth1 and auth2, because you cannot use cookies. Or would you do something like store some value for auth2 on the client side, basically cache some id or whatever, redirect to a special endpoint on auth1, and basically look up that value? Since it is the exact same backend, just listening on another public URL, it would find the value inside the cache?
Yes, that could work, but you would still have redirects.
When you want to access your apps without them being able to use OIDC, then there are only trusted headers, right. I have already thought about developing something like `rauthy-proxy` when I am close to v1.0. The idea was that you could run it as a sidecar (I am running all workloads on Kubernetes) which does kind of the same thing as oauth2 proxy, but natively with Rauthy, without much config and setup, and using the most secure defaults.
When you do it like that, as well as with trusted headers, you have one big problem though that a lot of people forget: you don't have really good CSRF protection, because it technically cannot work when the downstream app does not implement some mechanism for this. If you can be sure that all users have browsers that at least support Lax and Strict cookies (hopefully by now) and all your applications are 100% safe regarding XSS attacks, then only setting a cookie is safe these days. When your downstream apps support OIDC, it is the safer option for sure.
A redirect for another login to a new site at the same IdP is not a big thing though. Rauthy can re-use the client's active session and issue a token without user interaction, and I think this is not too bad of a UX. It depends on how locked down your config is, of course.
There is another way to log in to multiple apps, but it is annoying as hell. :D You can request permission from the browser to set a cookie for a different domain, but this is very weird UX-wise. Microsoft does this, for instance, and it is such a bad UX.
If you want to log in to multiple downstream apps with a "single login", I would create a tiny proxy app. It could be the callback for all your apps, and it just has a list of apps it should log in to. When finished with the first flow, just redirect to app2's login, finish, then app3, and so on. This would work, and you would get a few redirects, but at least all of them without user interaction. The "bad" thing about this is that this proxy would be the MITM, and if it were vulnerable, all of your apps would have security issues.
Yeah, that's basically the idea, and I'm pretty sure several of the options in the table support it. If not, there's a lot of incorrect information in the table.
I needed this in the past for a Kubernetes storage dashboard which did not implement any auth at all, and the only option I found was oauth2 proxy, which was specifically built to serve this purpose. We were using Keycloak back then as well, and it was not able to do this for sure.
Even if it were possible, I would not do it tbh. The problem is that if any of the apps behind your auth reverse proxy is vulnerable to XSS attacks, it could actually steal your, maybe even `Strict`, cookie, because it would run on the same origin as your auth provider. This is actually the only situation where Lax and Strict cookies are not safe: via another app on the same domain that is vulnerable.
Because of this attack vector, and because you never know how people host / build their apps, Rauthy's Admin UI, for instance, cannot be accessed with just a token. The UI uses a session which is managed in the backend, with an additional CSRF token from local storage.
Aw, good to know. Can you clarify what is meant by "True passwordless accounts with E-Mail + Magic Link + Passkey" in the feature list? Maybe you can get a magic link but have to associate it with a passkey immediately?
What I mean by this is that you can have a fully working 2FA / MFA account without ever setting a password at all. Upon registration, you will get an E-Mail with a link to set your initial "password". Using this link will validate your email at the same time. But you can then choose between a password and a passkey. If you choose a passkey, you never need any password at all.
Some providers create problems with this E-Mail, for instance Microsoft. They scan their customers' emails and even use(!) the links inside to "protect" them from bad links. But what Rauthy does by default is set an encrypted binding cookie once that link has been used. From that moment on, even if no data has been submitted via the form, the link can only be accessed by this very browser and no other. The reason is that merely having a secure URL with random, non-guessable values is not really secure. URLs are always in plain text, and they are even logged in plain text in a lot of places.
With this binding cookie, Rauthy makes sure that even if someone picked such a URL out of a log, used it, and was simply faster than the original user (or automated...), it cannot happen. For an initial account sign-up this is not the end of the world, but for a password reset it certainly is.
Rauthy tries to be as secure as possible by default, but companies like Microsoft kill it. This is the only reason why you can opt out of this binding cookie feature.
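A minimal sketch of that binding-cookie idea (not Rauthy's actual code; Rauthy encrypts the cookie value, here it is just a random secret): the first GET binds the link to the requesting browser, and every later request must present the cookie.

```rust
// Sketch of the binding-cookie pattern: cookie/route names are illustrative.
use axum::{
    extract::{Path, State},
    http::{header, StatusCode},
    response::{IntoResponse, Response},
};
use axum_extra::extract::CookieJar;
use std::{collections::HashMap, sync::{Arc, Mutex}};

type Bindings = Arc<Mutex<HashMap<String, String>>>; // link id -> binding secret

async fn open_magic_link(
    State(bindings): State<Bindings>,
    jar: CookieJar,
    Path(link_id): Path<String>,
) -> Response {
    let mut map = bindings.lock().unwrap();
    match map.get(&link_id).cloned() {
        // First use: bind the link to this browser via a cookie.
        None => {
            let secret = uuid::Uuid::new_v4().to_string();
            map.insert(link_id, secret.clone());
            let cookie =
                format!("link-binding={secret}; Secure; HttpOnly; SameSite=Lax; Path=/auth");
            ([(header::SET_COOKIE, cookie)], "show the reset form").into_response()
        }
        // Later uses: only the browser holding the binding cookie may continue,
        // so a URL fished out of a log is useless on its own.
        Some(secret) if jar.get("link-binding").map(|c| c.value()) == Some(secret.as_str()) => {
            "show the reset form".into_response()
        }
        Some(_) => StatusCode::FORBIDDEN.into_response(),
    }
}
```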
@sebadob one other question: are you interested in more attention for Rauthy? I think if it was more widely known it would likely become quite popular. Personally I've learned I don't always want more attention for my open source projects haha, so figured I'd ask before I start recommending it to people.
I am absolutely on your side here. Especially in the beginning, it just costs more time than anything else, because you have so much on your TODO list anyway, but you have to answer issues and such. This is the reason why I did not do too much about it.
But Rauthy is very close to the v1.0 release now. The next phase will be about some benchmarking, code cleanup, some leftover TODOs and optimization, but feature-wise we are almost there. Apart from that, the experimental FedCM will be implemented as well, but this does not keep us from releasing stable.
So yes, sure, it would actually be nice to have some more people testing before the first v1.0.0 release. :)
This is an LDAP interface for authentication, based on the users and groups stored in Kanidm. That is the other way around from what I asked, and it can be covered with OIDC.
The Kanidm LDAP interface does a lot more than you describe here :)
Okay, I get the idea. What is unclear to me though is how you would perform the redirect between auth1 and auth2, because you cannot use cookies. Or would you do something like store some value for auth2 on the client side, basically cache some id or whatever, redirect to a special endpoint on auth1, and basically look up that value? Since it is the exact same backend, just listening on another public URL, it would find the value inside the cache? Yes, that could work, but you would still have redirects.
Yep, that's basically the idea. You still have redirects, but the difference is they all happen at once when you log in to your auth server, i.e. no redirects for individual apps.
When you want to access your apps without them being able to use OIDC, then there are only trusted headers, right. I have already thought about developing something like `rauthy-proxy` when I am close to v1.0. The idea was that you could run it as a sidecar (I am running all workloads on Kubernetes) which does kind of the same thing as oauth2 proxy, but natively with Rauthy, without much config and setup, and using the most secure defaults.
I'm curious what your motivation for making `rauthy-proxy` would be, as opposed to just using an off-the-shelf reverse proxy?
When you do it like that, as well as with trusted headers, you have one big problem though that a lot of people forget: you don't have really good CSRF protection, because it technically cannot work when the downstream app does not implement some mechanism for this. If you can be sure that all users have browsers that at least support Lax and Strict cookies (hopefully by now) and all your applications are 100% safe regarding XSS attacks, then only setting a cookie is safe these days. When your downstream apps support OIDC, it is the safer option for sure.
A redirect for another login to a new site at the same IdP is not a big thing though. Rauthy can re-use the client's active session and issue a token without user interaction, and I think this is not too bad of a UX. It depends on how locked down your config is, of course.
This is a valid point. It sucks, because not needing to implement authentication makes apps significantly easier to build. I suppose that can be mitigated with nice OIDC libraries though. And most multi-user apps still need some authorization implementation, which is more work than authentication anyway, so maybe you're not really saving that much.
That said, have you thought at all about possible ways to mitigate these CSRF concerns? Maybe some sort of system-wide Referer shenanigans built into `rauthy-proxy`? Probably not doable...
If you want to log in to multiple downstream apps with a "single login", I would create a tiny proxy app. It could be the callback for all your apps, and it just has a list of apps it should log in to. When finished with the first flow, just redirect to app2's login, finish, then app3, and so on. This would work, and you would get a few redirects, but at least all of them without user interaction. The "bad" thing about this is that this proxy would be the MITM, and if it were vulnerable, all of your apps would have security issues.
This would mitigate the UX issues somewhat, but I agree doing an individual OIDC flow per app isn't actually that big of a deal. And honestly, it's probably good training for the user to develop a slightly clearer mental model of the security they're operating under.
I needed this in the past for a Kubernetes storage dashboard which did not implement any auth at all, and the only option I found was oauth2 proxy, which was specifically built to serve this purpose. We were using Keycloak back then as well, and it was not able to do this for sure. Even if it were possible, I would not do it tbh. The problem is that if any of the apps behind your auth reverse proxy is vulnerable to XSS attacks, it could actually steal your, maybe even `Strict`, cookie, because it would run on the same origin as your auth provider. This is actually the only situation where Lax and Strict cookies are not safe: via another app on the same domain that is vulnerable. Because of this attack vector, and because you never know how people host / build their apps, Rauthy's Admin UI, for instance, cannot be accessed with just a token. The UI uses a session which is managed in the backend, with an additional CSRF token from local storage.
This vulnerability only exists if you're running on the exact same domain, with apps at different paths, right? I.e., as long as you're using separate subdomains for apps, you should be safe? IMO trying to get multiple apps running on the same domain is a bad idea, for this and other reasons.
What I mean by this is that you can have a fully working 2FA / MFA account without ever setting a password at all. Upon registration, you will get an E-Mail with a link to set your initial "password". Using this link will validate your email at the same time. But you can then choose between a password and a passkey. If you choose a passkey, you never need any password at all. Some providers create problems with this E-Mail, for instance Microsoft. They scan their customers' emails and even use(!) the links inside to "protect" them from bad links. But what Rauthy does by default is set an encrypted binding cookie once that link has been used. From that moment on, even if no data has been submitted via the form, the link can only be accessed by this very browser and no other. The reason is that merely having a secure URL with random, non-guessable values is not really secure. URLs are always in plain text, and they are even logged in plain text in a lot of places. With this binding cookie, Rauthy makes sure that even if someone picked such a URL out of a log, used it, and was simply faster than the original user (or automated...), it cannot happen. For an initial account sign-up this is not the end of the world, but for a password reset it certainly is. Rauthy tries to be as secure as possible by default, but companies like Microsoft kill it. This is the only reason why you can opt out of this binding cookie feature.
Generally the way I've seen this scanning behavior mitigated is by introducing an additional click in the process. So the link goes to a page that can be loaded as many times as you want, and that page contains only a single button/link that says "Click to continue logging in", and that link can only be followed once. Would that solve your problem with Microsoft or am I missing something?
I'm not sure I understand your concern with the magic link going into logs though. Are you referring to email provider logs? They already have complete control over your email; they can steal your accounts whenever they want. Same for a compromised browser. Or are you referring to a different vector? Generally I support defense-in-depth, but this seems fairly likely to result in UX issues if the user starts the flow on one device and opens the email on another.
I am absolutely on your side here. Especially in the beginning, it just costs more time than anything else, because you have so much on your TODO list anyway, but you have to answer issues and such. This is the reason why I did not do too much about it. But Rauthy is very close to the v1.0 release now. The next phase will be about some benchmarking, code cleanup, some leftover TODOs and optimization, but feature-wise we are almost there. Apart from that, the experimental FedCM will be implemented as well, but this does not keep us from releasing stable. So yes, sure, it would actually be nice to have some more people testing before the first v1.0.0 release. :)
Right on, well I for one am going to start evangelizing. This is an excellent project.
BTW, would love to hear your thoughts on FedCM, especially as compared to Mozilla Persona. We could have that conversation over on my IndieBits forums, or here or somewhere else if you prefer. I know @erlend-sh would likely be interested in joining as well.
I'm curious what your motivation for making `rauthy-proxy` would be, as opposed to just using an off-the-shelf reverse proxy?
I am using OAuth2 Proxy for a few things, but it has one issue. I don't know if this has been solved by now, but the version still running here has the problem that you can't use 2 different oauth2 proxies linked to 2 different auth providers in the same environment. For instance, we had one of them linked to a Keycloak instance and the other one to Rauthy. When you were logged in with one IdP, you would always end up with HTTP 500s, because the other one found an oauth2 proxy cookie, but it was set for the other IdP. The only way to get rid of the error is to either wait for an hour (haha :D) or delete the cookie manually each time.
But it has been quite some time since I last set them up, and the oauth2 proxy can do a lot more than back then. After my initial setup, someone else was maintaining the deployments.
Originally I wanted to find a solution for exactly the CSRF issue with a rauthy-proxy, and I have thought a lot about it. But to make it really secure, you cannot treat security as an add-on; it must be an integral part instead. Oauth2 proxy is using double submit cookies, and you can do stuff like setting a `Lax` session cookie plus an additional `Strict` one that is required for any modifying request.
This would not hurt your UX, because you can follow links and the session cookie for a GET is sent, but to actually modify something, you need to do another roundtrip to make the `Strict` cookie happy. And this is fine for the most part. Except for, as you said, when you run multiple apps under the same domain and one of them is vulnerable...
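A sketch of that `Lax` + `Strict` pair as middleware (cookie names are made up, and the actual session validation is omitted): reads pass with the `Lax` cookie alone, while writes additionally require the `Strict` cookie, which browsers never attach to cross-site requests.

```rust
// Register with `axum::middleware::from_fn(csrf_guard)`.
use axum::{extract::Request, http::StatusCode, middleware::Next, response::Response};
use axum_extra::extract::CookieJar;

async fn csrf_guard(jar: CookieJar, req: Request, next: Next) -> Result<Response, StatusCode> {
    // Reads only need the Lax cookie, so normal link navigation keeps working.
    if jar.get("session-lax").is_none() {
        return Err(StatusCode::UNAUTHORIZED);
    }
    // Anything state-changing additionally needs the Strict companion cookie.
    let is_write = !matches!(req.method().as_str(), "GET" | "HEAD" | "OPTIONS");
    if is_write && jar.get("session-strict").is_none() {
        return Err(StatusCode::FORBIDDEN); // likely a cross-site request
    }
    Ok(next.run(req).await)
}
```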
This is a valid point. It sucks, because not needing to implement authentication makes apps significantly easier to build. I suppose that can be mitigated with nice OIDC libraries though. And most multi-user apps still need some authorization implementation, which is more work than authentication anyway, so maybe you're not really saving that much.
This is why I created the rauthy-client. When someone has to set up OIDC without knowing anything about it, it can be pretty terrifying at first. Typical OIDC libraries allow you to configure anything and everything, but most of that you usually don't need. What you need are safe defaults, and that's it, as long as you don't have a very special use case. So the `rauthy-client` does exactly that and no more. It doesn't let you choose encryption algorithms and such; it does all of that for you in the most secure way possible, and I think the examples are pretty easy to follow.
This vulnerability only exists if you're running on the exact same domain, with apps at different paths, right? I.e., as long as you're using separate subdomains for apps, you should be safe? IMO trying to get multiple apps running on the same domain is a bad idea, for this and other reasons.
Exactly. But a lot of people think they are totally safe with `Lax` cookies... which is correct... for the most part at least. :)
Generally the way I've seen this scanning behavior mitigated is by introducing an additional click in the process. So the link goes to a page that can be loaded as many times as you want, and that page contains only a single button/link that says "Click to continue logging in", and that link can only be followed once. Would that solve your problem with Microsoft or am I missing something?
That would work if I were allowed to use JS inside an HTML E-Mail. :D
No, the idea behind this is that all the bits and pieces in between on your request's journey through the internet are the problem. The URL is always clear text and is logged in so many places: every router or host your request travels through, every firewall. And when you take a look at any app, any log, any whatever, a URL is never considered to be sensitive information. For instance, even if I am just a normal user on a Linux host, I can see the logs of `sshd` without issues. If I could fetch a "secure" URL from such a place easily, there is nothing that could stop me from using it.
In the first version, I added a short PIN / password to this email, but people did not like it. -.-
The binding cookie is a really nice solution, as long as you don't have a spying email provider that does not care about your privacy at all. Until now, this problem has only come up with Microsoft accounts on an instance running at my old company. Not even Google is doing this...
Right on, well I for one am going to start evangelizing. This is an excellent project.
Thank you so much! :)
BTW, would love to hear your thoughts on FedCM, especially as compared to Mozilla Persona. We could have that conversation over on my IndieBits forums, or here or somewhere else if you prefer. I know @erlend-sh would likely be interested in joining as well.
This is being implemented in Rauthy behind an experimental feature flag pretty soon. We have @sjud working on this, and he already has a working example set up. The integration into Rauthy should be pretty easy, since most of the structures already exist.
This is a really cool and interesting draft, but only if they implement BYOIDP properly. And, more importantly, all the other browser vendors need to follow along. If this stays only a Google thing, it will not make it into Rauthy stable. I already hate it so much when websites are built against Chromium engines only and start falling apart on other browsers.
When this is properly implemented, it is the best solution from a UX perspective. It only has one issue, which is the cookie problem when you have multiple apps on the same origin and one of them is vulnerable, because especially with something like FedCM, you probably want pretty long-lived cookies to provide a good UX.
The cookie issue can be solved when people know what they are doing. Then it would be absolutely secure as long as your main app is, but this is most often not the case.
Originally I wanted to find a solution for exactly the CSRF issue with a rauthy-proxy, and I have thought a lot about it. But to make it really secure, you cannot treat security as an add-on; it must be an integral part instead. Oauth2 proxy is using double submit cookies, and you can do stuff like setting a `Lax` session cookie plus an additional `Strict` one that is required for any modifying request.
You mean oauth2-proxy is using double submit cookies for the auth page, or it's somehow injecting them into the traffic from the apps?
This is why I created the rauthy-client. When someone has to set up OIDC without knowing anything about it, it can be pretty terrifying at first. Typical OIDC libraries allow you to configure anything and everything, but most of that you usually don't need. What you need are safe defaults, and that's it, as long as you don't have a very special use case. So the `rauthy-client` does exactly that and no more. It doesn't let you choose encryption algorithms and such; it does all of that for you in the most secure way possible, and I think the examples are pretty easy to follow.
This is a laudable effort. In my experience trying to create very simple OAuth2 clients, it's the redirects that are tricky to abstract away in a super clean way. The end result isn't too bad, but it does require developers to essentially understand the flow, which is pretty confusing at first IMO.
Generally the way I've seen this scanning behavior mitigated is by introducing an additional click in the process. So the link goes to a page that can be loaded as many times as you want, and that page contains only a single button/link that says "Click to continue logging in", and that link can only be followed once. Would that solve your problem with Microsoft or am I missing something?
That would work if I were allowed to use JS inside an HTML E-Mail. :D No, the idea behind this is that all the bits and pieces in between on your request's journey through the internet are the problem. The URL is always clear text and is logged in so many places: every router or host your request travels through, every firewall. And when you take a look at any app, any log, any whatever, a URL is never considered to be sensitive information. For instance, even if I am just a normal user on a Linux host, I can see the logs of `sshd` without issues. If I could fetch a "secure" URL from such a place easily, there is nothing that could stop me from using it.
Not sure how JS is involved here. What I'm trying to describe only involves links.
Also, what do you mean by the URL being clear text? Are you referring to HTTP connections? HTTPS definitely hides the URL from everything until the terminating proxy, which is pretty much always controlled by a trusted entity. Only the SNI is sent in plaintext. You could certainly argue that logs can be accidentally leaked by those entities, but that depends on your threat model.
This is being implemented in Rauthy behind an experimental feature flag pretty soon. We have @sjud working on this, and he already has a working example set up. The integration into Rauthy should be pretty easy, since most of the structures already exist.
This is a really cool and interesting draft, but only if they implement BYOIDP properly. And, more importantly, all the other browser vendors need to follow along. If this stays only a Google thing, it will not make it into Rauthy stable. I already hate it so much when websites are built against Chromium engines only and start falling apart on other browsers.
When this is properly implemented, it is the best solution from a UX perspective. It only has one issue, which is the cookie problem when you have multiple apps on the same origin and one of them is vulnerable, because especially with something like FedCM, you probably want pretty long-lived cookies to provide a good UX. The cookie issue can be solved when people know what they are doing. Then it would be absolutely secure as long as your main app is, but this is most often not the case.
Right on, thanks for the perspective.
You mean oauth2-proxy is using double submit cookies for the auth page, or it's somehow injecting them into the traffic from the apps?
Yes, so what it does is basically pretty simple:
So it enables applications that do not have authentication to actually work with SSO in a pretty convenient way.
For instance, the pretty popular Longhorn storage provider for K8s has a nice dashboard for managing your volumes, but without any authentication at all. I am running OAuth2 Proxy in front of it, which then secures the whole application for me. As I said, I have not taken a look at it for some time, because by now all my stuff works with OIDC natively, but I think it does CSRF protection with the double submit cookie pattern as well, yes.
Since it is running as a reverse proxy in front of the application, it can inject and validate cookies.
If your app supports it, a native OIDC implementation should be favored, but if not, the oauth2 proxy is really nice to have.
Also, what do you mean by the URL being clear text? Are you referring to HTTP connections? HTTPS definitely hides the URL from everything until the terminating proxy, which is pretty much always controlled by a trusted entity. Only the SNI is sent in plaintext. You could certainly argue that logs can be accidentally leaked by those entities, but that depends on your threat model.
Yes, you usually only see the domain name in the Client Hello. TLS 1.3 can encrypt that too, but browser support is still limited.
I come from the enterprise world, where you usually have quite a few proxies in between, and they often do deep packet inspection and such things. So they decrypt the traffic and re-encrypt it afterwards (and often they don't re-encrypt, which is really bad). The current implementation is a bit paranoid, I must confess, and I have already talked with quite a few people about it. My plan is to reverse the logic for these security add-ons and make them opt-in in favor of a better UX, because most people will never end up in the situation with decrypting proxies in between. Also, the additional E-Mail verification that currently exists on the password reset form will be removed in favor of a nicer UX.
Btw, one thing that came to my mind: there is often confusion between same-site and same-origin. I don't know if I said it correctly before.
The problem with cookies is that we don't have a `same-origin` attribute, only `same-site`. This is nice, but for instance `auth.example.com` and `evilapp.example.com` are not the same origin, yet they are the same site. This means that when you have set a cookie for `auth.example.com` and access `evilapp.example.com` with that cookie, it can read it.
So, different domains are fine, but a different sub-domain will always receive the cookie.
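To illustrate the difference, here are the two kinds of cookies side by side, built with the `cookie` crate (the crate itself is incidental): one host-only like Rauthy's, one shared across the whole site.

```rust
// Host-only vs. site-wide cookie; names and values are illustrative.
use cookie::{Cookie, SameSite};

fn example_cookies() -> (Cookie<'static>, Cookie<'static>) {
    // No Domain attribute: host-only, sent to auth.example.com and nowhere else.
    let host_only = Cookie::build(("session", "abc"))
        .path("/auth")
        .secure(true)
        .http_only(true)
        .same_site(SameSite::Lax)
        .build();

    // Domain=example.com: shared with every subdomain, evilapp.example.com included.
    let site_wide = Cookie::build(("session", "abc"))
        .domain("example.com")
        .path("/")
        .secure(true)
        .http_only(true)
        .same_site(SameSite::Lax)
        .build();

    (host_only, site_wide)
}
```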
BTW, would love to hear your thoughts on FedCM, especially as compared to Mozilla Persona. We could have that conversation over on my IndieBits forums, or here or somewhere else if you prefer. I know @erlend-sh would likely be interested in joining as well.
Reading the FedCM MDN docs, it appears as though it's up to relying parties to add the IdP to any page for authentication. So rather than being a distributed "BYO account" service, it looks a lot more like a way to just add "auth with Google" to your service.
@Firstyear that is what it is at risk of becoming, but there is an open call for indies to implement true BYOIDP: https://github.com/sebadob/rauthy/discussions/145#discussioncomment-8831943
@erlend-sh Given the current browser situation, and what's already happened to webauthn at the hands of ms/google, I wouldn't be optimistic. :(
and what's already happened to webauthn at the hands of ms/google
@Firstyear what exactly do you mean by that?
They often simply "destroy" things to make more business. So many annoying things, instead of "creating a better internet". That is simply not their goal, even if they sell it as their tagline.
I stopped self-hosting email because Microsoft just decides to randomly block mail from your server when you are too small. You can contact them and assure them that you are not a spammer, but they don't care; they simply ignore you. -.-
Btw, one thing that came to my mind: there is often confusion between same-site and same-origin. I don't know if I said it correctly before. The problem with cookies is that we don't have a `same-origin` attribute, only `same-site`. This is nice, but for instance `auth.example.com` and `evilapp.example.com` are not the same origin, yet they are the same site. This means that when you have set a cookie for `auth.example.com` and access `evilapp.example.com` with that cookie, it can read it. So, different domains are fine, but a different sub-domain will always receive the cookie.
@sebadob I've been thinking more about the subdomain cookie / trusted header problem, and I have an idea. Here's the problem as I understand it:
With trusted header auth you sign in to `auth.example.com`, and it sets a cookie with the Domain attribute set to `.example.com`. Then, whenever a request is checked at the forward auth endpoint, if it has a valid cookie, it sets some sort of `UserID` header, which is then trusted implicitly by any upstream apps, say `app1.example.com` and `app2.example.com`.
The problem is that if `app2.example.com` is evil, or has an XSS vulnerability, it can send AJAX requests directly to `app1.example.com`, and the auth cookie will be sent even if SameSite is Strict or Lax, because they are on the same site, enabling `app2.example.com` to perform whatever actions it wants on `app1.example.com`.
Could this possibly be solved by having an additional cookie per domain? So basically, when you navigate to `app1.example.com` for the first time, it sets a simple cookie with no Domain attribute, which means it won't be shared with other subdomains. Then, when making requests, the auth server only sets the `UserID` header if the request contains both an auth cookie and a valid cookie for the exact domain as the host.
I'm not a security expert and very likely missing something obvious. Thoughts?
I'm not a security expert and very likely missing something obvious. Thoughts?
No, this is correct. This is the reason why Rauthy does not set a Domain attribute on its cookies, only the path `/auth`.
The problem is that not all user agents treat cookies without a Domain attribute correctly and just set it by default to "the current one" when it's omitted. I have not tested in a while whether by now all major browsers do this correctly, but this is my latest information on it.
For instance, when I check Rauthy's cookies now inside my browser, it shows the Domain attribute, even though I do not set it on purpose. But I am not sure whether they treat it correctly or maybe just show it for convenience.
I would need to do some tests to check all current browsers, how they handle this.
The problem then is if someone is using IE8 or something. :D But you cannot protect those people imho, and if you try to do so, security becomes like 95% of your code, even for a todo app. When I do stuff like this, I do not use bleeding-edge technology, but I consider everything stable that all major browsers have been able to do for at least half a year. Within that time, people should have upgraded.
It would be interesting to know what the current state is on modern browsers regarding cross-domain sharing when the Domain attribute is explicitly not set.
Setting the Domain attribute to the full subdomain should solve that, right?
Setting the Domain attribute to the full subdomain should solve that, right?
That's how my cookies currently look when I omit the Domain attribute. I guess this should fix the issue with the subdomains. But I can't guarantee that; I have not tested it yet.
Edit:
But if this works, it would actually fix the last big problem with cookies so far. :)
Well, this would solve the issue as long as you don't have sub-sub-domains. :p
Let's say I have my app running at `iam.example.com` and the Domain is set to exactly that; then `evil.example.com` would not get the cookie, but `sub.iam.example.com` would get it again, afaik. But yes, I would need to set up some test env.
Even if that is the case, you could strip the auth cookies from the requests before forwarding them to the apps, i.e. only including the `UserID` header. I actually like that idea anyway. There is no reason for the apps to see those cookies (or have to parse them).
The main problem is this requires a compatible reverse proxy. Not sure if any of them allow you to strip headers when using forward auth, since the main point is to append headers.
@mholt if you have a few cycles to look at this, I'd appreciate your take. I think it's a fairly serious security concern and might be worth handling in Caddy's forward_auth implementation.
For brevity you should be able to jump in just a few messages back, starting with https://github.com/sebadob/rauthy/issues/337#issuecomment-2075449429
EDIT: NVM I don't think there's anything that would need to be changed in Caddy (see my comment below). Should be able to just strip the headers as normal. Would still appreciate your opinion on whether my idea would solve the problem.
Usually you can strip headers without issues, and you even must strip certain headers when doing reverse proxying. I don't know of a reverse proxy that can't do it.
I just had a second look, and when I omit the Domain with Rauthy's cookies, the browser does set the Domain to "the current one" but also sets the `HostOnly` flag automatically. I am not sure though whether that means sub-sub-domains can see it or not.
Oh duh. For some reason I was thinking you would need to strip the headers in the forward auth configuration itself, but it should work fine to just strip them as you normally would in the reverse proxy.
@anderspitman That's a lot to read for context :sweat_smile: (even with the summary comment). If the crux of the matter is this:
Not sure if any of them allow you to strip headers when using forward auth, since the main point is to append headers.
Then no worries, as you discovered (probably). Caddy lets you strip headers by prefixing the header field name with `-`.
@mholt no worries. Thanks for checking in!
@anderspitman It should work as expected in all current browsers. Taken from the MDN docs:
If you omit the Domain completely, not even sub-sub-domains could access it, according to these docs.
Oh duh. For some reason I was thinking you would need to strip the headers in the forward auth configuration itself, but it should work fine to just strip them as you normally would in the reverse proxy.
Proxies often need to strip certain outgoing headers to prevent leakage of internal data, and they need to adjust hop counts and so on, but they should take care of this without you needing to think about it.
Seems like the double cookie thing might work then.
Could this possibly be solved by having an additional cookie per domain? So basically, when you navigate to `app1.example.com` for the first time, it sets a simple cookie with no Domain attribute, which means it won't be shared with other subdomains. Then, when making requests, the auth server only sets the `UserID` header if the request contains both an auth cookie and a valid cookie for the exact domain as the host.
Yes, I mean, that can be solved, but then we are at native OIDC per app again. Basically, have separate clients and signed tokens, for which you validate the `aud` / `azp`, so you can use a token for client A only there, and client B would reject it.
Trusted headers should only be used if you are sure that there are no hostile hosts / apps.
I mean, you could start setting different headers for different hosts, which they look for, and keep this "secret" somehow. But tbh, at some point implementing OIDC is far easier. :D
All other ideas for how you could solve "be authenticated immediately everywhere" would involve OAuth / OIDC and client-side app modification. But when you need to modify your client app anyway, you could just implement OIDC directly and have it way more secure.
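A compact sketch of that per-client check: validate `aud` against the app's own client id and, if present, require `azp` to match too. These are standard OIDC claims; RS256 is an assumption, and this complements the earlier id_token sketch.

```rust
// A token minted for client A is rejected by client B.
use jsonwebtoken::{decode, Algorithm, DecodingKey, Validation};
use serde::Deserialize;

#[derive(Deserialize)]
struct Claims {
    sub: String,
    azp: Option<String>, // authorized party, if the IdP sets it
}

fn accept_token(token: &str, issuer_pem: &[u8], my_client_id: &str) -> anyhow::Result<Claims> {
    let mut validation = Validation::new(Algorithm::RS256);
    validation.set_audience(&[my_client_id]); // wrong `aud` -> hard error
    let data = decode::<Claims>(token, &DecodingKey::from_rsa_pem(issuer_pem)?, &validation)?;
    if let Some(azp) = &data.claims.azp {
        anyhow::ensure!(azp == my_client_id, "token authorized for a different client");
    }
    Ok(data.claims)
}
```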
I think you're describing something a lot more complicated than what I have in mind. Again, I'm likely missing something.
The goal is to determine if a request is cross-origin between two apps, and only include trusted headers for requests that are not cross-origin. If you can do that, you should be protected against evil/exploited apps.
My proposal for detecting this is to set a simple cookie for every app. It doesn't need any signing or randomness; it just needs to exist. Those cookies will not have the Domain set, so they should never be sent cross-origin. When the auth server is asked to validate a request, it considers it valid as long as it has a `.example.com` auth cookie, but it only sets the trusted auth headers if it has both a `.example.com` cookie and an `app1.example.com` cookie, which indicates the request is not cross-origin.
Note that cross-origin requests are still considered valid, but the trusted auth headers will not be set, so the upstream app will reject the requests unless they contain proper authorization tokens of some sort. But that falls under the normal security of the app, not the systemic problem I'm trying to address.
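A sketch of that rule as a forward-auth handler; the cookie and header names are invented for illustration. The shared cookie decides valid/invalid, and the host-only marker cookie decides whether the trusted header is vouched for.

```rust
// `auth-session` (Domain=.example.com) and `app-origin-marker` (host-only,
// no Domain attribute) are made-up names for this sketch.
use axum::http::{HeaderMap, StatusCode};
use axum_extra::extract::CookieJar;

async fn forward_auth(jar: CookieJar) -> Result<HeaderMap, StatusCode> {
    // Shared auth cookie: is the user logged in at all?
    let session = jar.get("auth-session").ok_or(StatusCode::UNAUTHORIZED)?;
    let user_id = lookup_session(session.value()).ok_or(StatusCode::UNAUTHORIZED)?;

    let mut headers = HeaderMap::new();
    // The host-only marker cookie proves the request is not cross-origin,
    // so only then do we vouch for the user to the upstream app.
    if jar.get("app-origin-marker").is_some() {
        headers.insert("X-User-Id", user_id.parse().map_err(|_| StatusCode::BAD_REQUEST)?);
    }
    // 200 without the header = valid request, but the app must do its own auth.
    Ok(headers)
}

fn lookup_session(_cookie_value: &str) -> Option<String> {
    Some("user-123".into()) // placeholder for a real session store
}
```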
Ahh, now I get what you mean.
Yes, I guess this would be a solution if you need to host your apps under the same domain. That's actually pretty clever.
Actually, I think I'm overcomplicating it. I think checking the `Origin` header should be sufficient.
No, it's not, because the attack vector you want to mitigate comes from another evil backend app. This means the app can easily fake the Origin header, because the request would most probably not come from a browser at all. The Origin is just another HTTP header you can set as you like. Only inside the browser will it be overwritten and set for you, and when a request is coming from there, the host-only cookie alone will do the job.
My cookie plan would also be easily thwarted in that case.
That said, you shouldn't need to worry about evil app backends making requests. They won't even have access to the `.example.com` auth cookies, because those will be stripped by the reverse proxy.
Hmm, right. As long as the other evil apps never get your cookies, then Origin header checking would be fine. Then you only need to worry about really making sure to strip headers correctly.
But then I am asking myself again, wouldn't it be easier to just use OIDC everywhere? Then you would not need to worry about all of this.
The point is that the apps don't have to worry about this. You implement it once in your auth server and every app gets authentication for free, needing only to trust the headers provided.
With OIDC every app has to implement it.
Yes, but then you need to implement the header trust in every application, while at the same time you could implement a token trust as well, which would even be less work when you use something like the `rauthy-client`.
You could set the token in just the same way as you would set auth headers. And implementing the whole OIDC flow is just a very few lines of code when you have a good client library, while it gives you a lot more information and flexibility. And it simplifies your whole setup, because you don't even need the reverse proxy if you don't want it. You also don't have to redo the whole config when you run your apps in a new environment, for instance, or if you change your reverse proxy.
I mean, both approaches have their ups and downs, obviously. But without doing anything in your clients, I don't see how this should work. Even when you place something like the oauth2 proxy in front to authenticate an app that can't do any auth at all, you still have to configure and set up the oauth2 proxy itself and manage the deployment.
For instance, with the rauthy-client you only need to implement 2 API handlers.
This one does 2 things at once:
```rust
// Checks authentication: if the extracted principal is valid, the request
// passes; otherwise the response contains everything needed to redirect
// the user to the login.
pub async fn get_auth_check(config: ConfigExt, principal: Option<PrincipalOidc>) -> Response<Body> {
    let enc_key = config.enc_key.as_slice();

    rauthy_client::handler::axum::validate_redirect_principal(
        principal,
        enc_key,
        OidcCookieInsecure::No,
        OidcSetRedirectStatus::Yes,
    )
    .await
}
```
The other one you need is the callback endpoint:
```rust
pub async fn get_callback(
    jar: axum_extra::extract::CookieJar,
    config: ConfigExt,
    params: Query<OidcCallbackParams>,
) -> Response<Body> {
    let enc_key = config.enc_key.as_slice();

    // Exchange the callback params for a token set; the validation happens
    // inside the client.
    let callback_res =
        rauthy_client::handler::axum::oidc_callback(&jar, params, enc_key, OidcCookieInsecure::No)
            .await;
    let (cookie_str, token_set, _id_claims) = match callback_res {
        Ok(res) => res,
        Err(err) => {
            return Response::builder()
                .status(400)
                .body(Body::from(format!("Invalid OIDC Callback: {}", err)))
                .unwrap()
        }
    };

    // Hand the access token to the frontend template and continue at `/`.
    let body = templates::HTML_CALLBACK
        .replace("{{ TOKEN }}", &token_set.access_token)
        .replace("{{ URI }}", "/");

    Response::builder()
        .status(200)
        .header(SET_COOKIE, cookie_str)
        .header(CONTENT_TYPE, "text/html")
        .body(Body::from(body))
        .unwrap()
}
```
You could mostly just copy & paste this and have your app OIDC-enabled. Half of the second endpoint could even be stripped away as well if you just want to use double submit cookies or things like that. When you implement something custom in each app to extract the headers, define middlewares, or even call validation functions in each endpoint, I think you would end up with at least the same amount of code, if not more.
At the same time, you have full flexibility and all the information from the id token, and you can even react dynamically or extract groups and roles in a much nicer way.
Having an application not worry about security sounds a bit wrong already, I think. Security is not an add-on but an integral part imho, and it's even the first thing I implement in any app before doing anything else.
I developed obligator with many of the same design goals as Rauthy. Honestly, if I had known about Rauthy at the time (it was brought to my attention recently by @erlend-sh), it might have met my needs better than making my own from scratch. Rauthy is much more complete. Anyway, I was wondering if you might help me fill out / fix any errors I may have made concerning Rauthy in the comparison table here. You can suggest edits to the source spreadsheet here. Thanks!