assignUser opened this issue 1 year ago
Can you maybe quote the recommendation you are referring to, and also share more about which use case you have in mind? :)
Some thoughts (this may all be very premature given that I don't yet understand the exact use case/challenge you have in mind): login is the process of exchanging long-lived primary credentials (e.g. a password) for short-lived secondary credentials (e.g. an authentication token). Login itself always needs some kind of long-lived credential as input (in single sign-on this is often a long-lived cookie or a password). For machine-to-machine login one typically does not use one of the human-oriented SSO flows. For machine-to-machine logins ("service account login") we inevitably need some kind of long-lived credential; it is of course an important question where it is stored and how it is injected.
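For illustration, here is a minimal sketch of what such a machine-to-machine ("service account") login could look like from a CI job's perspective. The `/api/login` endpoint, the environment variable names, and the response shape are assumptions for the sake of the example, not the actual Conbench API:

```python
# Hypothetical sketch: exchange a long-lived credential (injected into the CI
# job as a secret) for a short-lived secondary credential (an auth token).
import os
import requests

CONBENCH_URL = "https://conbench.example.org"  # placeholder deployment URL


def service_account_login() -> str:
    # The long-lived credential comes from the CI secret store; where and how
    # it is stored and injected is the important open question.
    resp = requests.post(
        f"{CONBENCH_URL}/api/login",
        json={
            "email": os.environ["CONBENCH_EMAIL"],
            "password": os.environ["CONBENCH_PASSWORD"],
        },
        timeout=10,
    )
    resp.raise_for_status()
    # Assumed response shape: a short-lived token used on subsequent requests.
    return resp.json()["token"]
```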
With OpenID Connect (OIDC), you can take a different approach by configuring your workflow to request a short-lived access token directly from the cloud provider.
This will obviously entail implementing OIDC server-side in Conbench, and I have no idea if that is feasible, but given the recent CCI incident it seems like an interesting option to investigate.
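For context, the client side of that flow already exists in GitHub Actions: a job with `id-token: write` permission gets two environment variables injected and can request an ephemeral ID token from the GitHub OIDC provider. A rough sketch (the "conbench" audience value is an assumption; a Conbench RP would define its own):

```python
# Runs inside a GitHub Actions job with `permissions: id-token: write`.
import os
import requests


def fetch_github_oidc_token(audience: str = "conbench") -> str:
    request_url = os.environ["ACTIONS_ID_TOKEN_REQUEST_URL"]
    request_token = os.environ["ACTIONS_ID_TOKEN_REQUEST_TOKEN"]
    resp = requests.get(
        request_url,
        params={"audience": audience},
        headers={"Authorization": f"bearer {request_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    # Response body is {"value": "<JWT>"}: an OIDC token that is created for
    # and valid only during this workflow run.
    return resp.json()["value"]
```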
Instead of saving the creds as secret
When you wrote this, what did you mean by "the creds"? Did you think of cloud provider credentials to have the workflow talk to the cloud provider API, or did you think of Conbench service account credentials to have the workflow talk to a Conbench API?
This will obviously entail implementing OIDC server-side in Conbench
We have bits and pieces for OpenID Connect in Conbench already. It's important to distinguish between the identity provider (IdP) side of things (e.g. Google Accounts), the relying party (e.g. a SaaS-like API), and the user agent (e.g. a browser).
One could think of a situation where one can establish trust between a Conbench instance (as the relying party) and a foreign OIDC IdP (for example the GitHub Actions IdP), so that a robot can use that Conbench instance's API via an OIDC access token emitted by that foreign OIDC IdP.
I read through that documentation chapter that you linked. Thanks! As far as I understood, this is at a high level about being able to use cloud provider APIs from within GitHub Actions without the need to store long-lived secrets for cloud provider API authentication.
To that end, they describe the following architecture:
https://token.actions.githubusercontent.com
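For concreteness, that issuer exposes a standard OIDC discovery document pointing to its public signing keys; this is what a relying party would consume. A small, non-authoritative sketch:

```python
# Fetch the GitHub Actions IdP discovery document and its signing keys (JWKS).
import requests

ISSUER = "https://token.actions.githubusercontent.com"

config = requests.get(f"{ISSUER}/.well-known/openid-configuration", timeout=10).json()
jwks = requests.get(config["jwks_uri"], timeout=10).json()
# A relying party (e.g. a Conbench deployment) would use these public keys to
# verify the signature on ID tokens emitted by this IdP.
print(config["issuer"], len(jwks["keys"]))
```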
(I know a few things about single sign-on architectures, SAML, and also OpenID Connect, have worked rather intensively with the OIDC standard, and have worked in the bowels of both RP and IdP implementations. The GitHub doc is IMO not well-written; it unsurprisingly suffers from the typical problem in the SSO space: terminology is used in a highly inconsistent fashion.)
The GitHub doc is IMO not well-written; it unsurprisingly suffers from the typical problem in the SSO space: terminology is used in a highly inconsistent fashion.
Very, very true. It took me multiple skims and then stopping and actually deeply reading to get much out of this.
To that end, they describe the following architecture: ...
This is what I got out of the https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-cloud-providers section as well (not that you need me to tell you what you read, but given how confusing these are I'm glad to know one other human came to similar conclusions!). That being the case, how much of a lift would it be for us to try this out?
It looks like Buildkite might also be able to send OIDC tokens (https://buildkite.com/docs/agent/v3/cli-oidc#main) in case we wanted to try that, but even if we started off GitHub Actions only, that would be an improvement and we could figure out if this is useful.
@jgehrcke Ah I see you have this well in hand. Thank you for this very informative write-up! I have no experience with OIDC, but will gladly use this to watch and learn ^-^
I agree that the GitHub docs (in general) are of quite mixed quality...
When you wrote this, what did you mean by "the creds"?
I was going off the current situation where the Conbench username/email + password are provided via environment secret to the workflow.
That being the case, how much of a lift would it be for us to try this out?
I hate doing that, but it's unclear what "this" is that you are referring to here 🙈.
I was going off the current situation where the Conbench username/email + password are provided via environment secret to the workflow.
Gotcha.
Cloud provider credentials deserve extra protection. I think that long-lived Conbench credentials can for now be injected via standard secret-passing mechanisms. This will carry us really far.
Building integration between a Conbench OIDC RP and a dynamically configured OIDC IdP is a lot of work.
Building integration between a Conbench OIDC RP and [GitHub Actions IdP, BK IdP] is also quite a bit of work, and the result is not yet useful without further access control work: imagine a Conbench instance that every GitHub Actions job in the world can POST results to.
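To make that access-control concern concrete, here is a hedged sketch of what the Conbench RP side might have to do on top of plain signature verification: check issuer, audience, and e.g. the `repository_owner` claim against an allow-list. The library choice (PyJWT), the "conbench" audience, and the allow-list are assumptions, not an existing implementation:

```python
import jwt  # PyJWT
from jwt import PyJWKClient

ISSUER = "https://token.actions.githubusercontent.com"
ALLOWED_OWNERS = {"conbench"}  # hypothetical allow-list of GitHub orgs/users


def verify_actions_token(raw_token: str) -> dict:
    # Fetch the signing key matching the token's key ID from the IdP's JWKS.
    signing_key = PyJWKClient(f"{ISSUER}/.well-known/jwks").get_signing_key_from_jwt(raw_token)
    claims = jwt.decode(
        raw_token,
        signing_key.key,
        algorithms=["RS256"],
        audience="conbench",  # must match the audience the workflow requested
        issuer=ISSUER,
    )
    # Without a check like this, every GitHub Actions job in the world could
    # POST results: restrict to repositories owned by trusted orgs/users.
    if claims.get("repository_owner") not in ALLOWED_OWNERS:
        raise PermissionError("token issued for an untrusted repository owner")
    return claims
```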
imagine a Conbench instance that every GitHub Actions job in the world can POST results to.
Ah that's not quite the intended result 😂
That being the case, how much of a lift would it be for us to try this out?
I hate doing that, but it's unclear what "this" is that you are referring to here 🙈.
"This" I meant setting up Conbench OIDC RP to Github Actions IdP (+ possibly the future access control work you mention). So, when you say "Building integration between a Conbench OIDC RP and a dynamically configured OIDC IdP is a lot of work." and "Building integration between a Conbench OIDC RP and [Github Actions IdP, BK IdP] is also quite a bit of work" that answers the question!
On the topic of "How to authenticate your CI job so that it can submit benchmark results to your Conbench deployment".
There are various methods. Three of them, with my brief assessment:
Problem with password-based login: DDoS risk due to many-roundtrip hashing (verifying a password is by design required to consume a lot of CPU time). To make the point, think: one can do O(1) password logins per second, but O(1000) service account logins per second. The difference comes from the amount of entropy in the 'shared' secret (not shared when doing PKI).
Also important: login is rare, authentication is frequent. Login might consume a lot of CPU on the backend; authentication should not. API clients that log in before each HTTP request are malicious and annoying, but the only way to deal with those automatically is via rate limiting around login.
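A rough illustration of that cost asymmetry (the iteration count and the SHA-256 lookup scheme are purely illustrative, not a proposal for Conbench):

```python
import hashlib
import hmac
import time

SALT = b"0" * 16


# Password login: deliberately expensive, many-roundtrip key derivation.
def verify_password(password: str, stored: bytes) -> bool:
    derived = hashlib.pbkdf2_hmac("sha256", password.encode(), SALT, 600_000)
    return hmac.compare_digest(derived, stored)


# Service-account "login": the secret itself carries ~256 bits of entropy, so
# one cheap hash plus a constant-time comparison is enough.
def verify_service_secret(secret: str, stored_digest: bytes) -> bool:
    return hmac.compare_digest(hashlib.sha256(secret.encode()).digest(), stored_digest)


stored_pw = hashlib.pbkdf2_hmac("sha256", b"hunter2", SALT, 600_000)
stored_secret = hashlib.sha256(b"f" * 64).digest()

t0 = time.perf_counter(); verify_password("hunter2", stored_pw)
t1 = time.perf_counter(); verify_service_secret("f" * 64, stored_secret)
t2 = time.perf_counter()
print(f"password verify: {t1 - t0:.3f}s, secret verify: {(t2 - t1) * 1e6:.1f}µs")
```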
(This is a bit of a thought dump along various dimensions, I know. But it all belongs to the question of "which login method(s) should we offer to API clients?")
I created https://github.com/conbench/conbench/issues/735 to remove a little bit of ambiguity, and to go back to problem space before entering solution space (the title of this ticket proposes a specific solution, but in the course of the discussion we learned that we have to narrow down the problem first).
GitHub recommends using https://openid.net/connect/ where possible to avoid having to store long-lived credentials as secrets.
Instead of saving the creds as secrets, the actual token is ephemeral: it is created for, and valid only during, a CI run.