hashicorp / vault

A tool for secrets management, encryption as a service, and privileged access management
https://www.vaultproject.io/

Proposal: JWT Claim-based OIDC Auth Backend #2525

Closed mwitkow closed 6 years ago

mwitkow commented 7 years ago

Kubernetes supports authentication (and group extraction for authorization) using OIDC (OpenID Connect) JWT id_tokens, see here for docs. Basically, JWT tokens are cryptographically verifiable sets of JSON key-value pairs called "claims".

For Kubernetes Auth, two such claims are used:

Both KeyCloak and Dex are configurable OpenID Connect servers that can delegate to upstream identity providers (e.g. Azure or Google).

This proposal is about introducing a configurable, generic OIDC auth backend that uses JWT token validation.

Contrary to what's been discussed previously in #465, OIDC doesn't require browser flows to be used, and as such is not an obstacle to Vault adoption. The tokens can be used in exactly the same fashion as GitHub personal tokens, by copy-pasting:

 vault auth -method=oidc token=<id_token>

In fact, this is exactly how K8S's kubectl is expected to be used, with the --token flag.

A couple of other considerations:

The K8S OIDC plugin seems fairly straightforward and could act as a basis for this work. We'd actually be willing to send in PRs for this if Vault maintainers would accept them :)
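
To make this concrete, the server-side validation would boil down to something like the following sketch with github.com/coreos/go-oidc; the issuer URL, client ID, and claim names are just placeholders, not part of the proposal itself:

```go
package oidcsketch

import (
	"context"

	oidc "github.com/coreos/go-oidc"
)

// verifiedClaims shows roughly how an OIDC auth backend could validate a
// pasted id_token and extract the claims used for policy mapping.
func verifiedClaims(ctx context.Context, rawIDToken string) (string, []string, error) {
	// Discovery fetches the provider's signing keys and endpoints once.
	provider, err := oidc.NewProvider(ctx, "https://dex.example.com")
	if err != nil {
		return "", nil, err
	}

	// Verify checks the token's signature, issuer, expiry, and audience.
	verifier := provider.Verifier(&oidc.Config{ClientID: "vault"})
	idToken, err := verifier.Verify(ctx, rawIDToken)
	if err != nil {
		return "", nil, err
	}

	// Claim names here are assumptions; K8S uses configurable username/groups claims.
	var claims struct {
		Email  string   `json:"email"`
		Groups []string `json:"groups"`
	}
	if err := idToken.Claims(&claims); err != nil {
		return "", nil, err
	}
	return claims.Email, claims.Groups, nil
}
```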

jefferai commented 7 years ago

Hi there,

Consuming OIDC for auth is a totally fine endeavor IMHO. (What we're not targeting currently is Vault being a provider, but that's separate from this.) I'd suggest working up a doc with a proposed API and describing the overall structure and we can do a review before you commit to code writing. Thanks!

pidah commented 7 years ago

@mwitkow we are interested in this proposal too and happy to help where we can.

mwitkow commented 7 years ago

@jefferai here's a proposal design doc for the OIDC auth backend for Vault: https://docs.google.com/document/d/1saxtZMuh3OYilpa0BvP5_Ien_6U5bqdqOUj5AATdTtc/edit?usp=sharing

The one thing I'm lacking context on is how Vault acceptance tests work, but a straw-man proposal for using them is outlined in the document.

cc @ericchiang, CoreOS Dex maintainer and K8S OIDC contributor, for context.

jefferai commented 7 years ago

@mwitkow Thanks for writing this up! I won't have time to get to it in depth until early next week, probably, but one thing I wanted to make you aware of up front: we do not allow the use of the PathMap/PolicyMap constructs in new backends, so you're not going to get an automatic API, and the proposal should sketch out what an API for the backend looks like.

ericchiang commented 7 years ago

@mwitkow hey yeah I'd be happy to provide code reviews and feedback on this! Overall content seems reasonable.

Please let us know if you need anything on the github.com/coreos/go-oidc side of things or if any explanations of OpenID Connect related things would be helpful.

mwitkow commented 7 years ago

Jeff, I understand why you guys don't want to support the path-to-policy mapping. Do you have an example of an API I could follow? As long as it's a mapping (JSON or path) from external names to Vault policy names we should be fine here πŸ™‚

jefferai commented 7 years ago

Hi @mwitkow ,

Generally these days we use config/ for config paths, roles/ for roles, users/ for users and groups/ for groups. Pretty straightforward. In your doc you are using teams, which probably would slot into the "roles" vernacular. It's pretty easy to change later on (at least until it's publicly released) but one of the reasons we don't use those maps anymore is that they make changing anything super not-straightforward :-)
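
To make that concrete, a roles/ path in this vernacular might be declared roughly like the sketch below using Vault's framework package; the field names are hypothetical and the operation callbacks are omitted, so treat it as illustration rather than an agreed API:

```go
package oidcauth

import (
	"github.com/hashicorp/vault/logical/framework"
)

// pathRoles sketches a "roles/<name>" path following the config/ + roles/
// layout; fields here are placeholders, not a settled schema.
func pathRoles() *framework.Path {
	return &framework.Path{
		Pattern: "roles/" + framework.GenericNameRegex("name"),
		Fields: map[string]*framework.FieldSchema{
			"name": {
				Type:        framework.TypeString,
				Description: "Name of the role.",
			},
			"bound_groups": {
				Type:        framework.TypeString,
				Description: "Comma-separated claim values that map to this role (hypothetical).",
			},
			"policies": {
				Type:        framework.TypeString,
				Description: "Comma-separated Vault policies granted by the role.",
			},
		},
		// Create/read/update/delete callbacks omitted in this sketch.
	}
}
```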

mwitkow commented 7 years ago

Cool, would you rather have a separate config variable that maps JWT claims to Vault groups and roles then? (Can you link me something that defines them more clearly in your vernacular?) That actually very much aligns with what we want to do πŸ™‚

jefferai commented 7 years ago

@mwitkow How familiar are you with Vault generally? Maybe it would be good to have a call at some point and discuss what you're trying to do, because who defines what, where, depends pretty strongly on what use cases you're trying to solve.

mwitkow commented 7 years ago

Sure, happy to grab a chat πŸ™‚ Anything that somewhat overlaps with the BST timezone would work. If you're on Gapps, just stick something in my "Michal(weird symbol)improbable.io" calendar πŸ™‚

mwitkow commented 7 years ago

@jefferai I've updated the proposal to follow the Okta Auth Backend and removed the PolicyMap bit:

Please see: https://docs.google.com/document/d/1saxtZMuh3OYilpa0BvP5_Ien_6U5bqdqOUj5AATdTtc/edit#

Would be great if we could get this signed off before we start dabbling in code :)

mikeokner commented 7 years ago

@mwitkow: I've been talking to @jefferai regarding #2571 and I think it may be possible to incorporate your proposed changes into mine. There are some concerns that mine won't be very widely usable because most OAuth2 providers don't make the password flow available to 3rd parties; however, I believe it wouldn't be too difficult to add the ability to use an auth code or OIDC token as well. Currently, I'm using the golang/oauth2 library and also following the Okta backend's general pattern. I'm not as familiar with the OIDC flow you're looking to implement, so please take a look at what I've already written and see how tall an order you think it would be to implement it in the same backend alongside my existing changes vs. starting from scratch.
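
For context, the resource owner password exchange in my PR boils down to something like this with golang/oauth2; the endpoint URLs, client credentials, and scopes below are placeholders, not the PR's exact code:

```go
package oauthsketch

import (
	"context"

	"golang.org/x/oauth2"
)

// passwordGrantToken exchanges a username/password collected by the CLI for
// an access token at the provider's token endpoint (resource owner flow).
func passwordGrantToken(ctx context.Context, username, password string) (*oauth2.Token, error) {
	conf := &oauth2.Config{
		ClientID:     "vault",
		ClientSecret: "client-secret", // kept in the backend's config, not on clients
		Endpoint: oauth2.Endpoint{
			AuthURL:  "https://idp.example.com/as/authorization.oauth2",
			TokenURL: "https://idp.example.com/as/token.oauth2",
		},
		Scopes: []string{"openid", "profile"},
	}
	return conf.PasswordCredentialsToken(ctx, username, password)
}
```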

jefferai commented 7 years ago

@mwitkow For some more context around what @mikeokner said, Ping Identity doesn't return an OIDC token with their flow (just an OAuth2 bearer token), but it does implement the UserInfo endpoint that provides the same parameters. We've been backchanneling and I suggested that they take a look at this proposal and how it could be modified to fit this kind of workflow, where it can also perform the OAuth2 handshake, and use a UserInfo endpoint as the source of truth rather than only a token. It'd be nice to have a sort of full-service oauth/oidc backend rather than carving things out across multiple backends (potentially with a lot of duplicate code). They have some code that does a fairly standard OAuth2 resource owner flow that could serve as a start towards a combined proposal.

mwitkow commented 7 years ago

@jefferai @mikeokner OIDC is a superset of OAuth2. It basically adds:

As far as I can see, Ping does optionally support OpenID Connect 1.0, and it does return Identity Tokens, so you should be able to integrate it.

The doc https://github.com/coreos/dex/blob/master/Documentation/openid-connect.md by @ericchiang is probably a great introduction to OpenID Connect.

Personally, I would prefer an OpenID Connect approach, as it is a standard that doesn't require users to configure things such as:

jefferai commented 7 years ago

@mwitkow We fully realize it's a superset of OAuth2. And since Ping (sort of) supports OIDC that's a reason I'd like to see them merging.

However:

ericchiang commented 7 years ago

Ping doesn't always return an ID token depending on how you auth, even though in those cases it does support the UserInfo endpoint

The id_token is mandatory in the response[0]. Is the issue that users will log in to Ping not using the openid scope? If Ping supports OpenID Connect, there surely must be a way to always get an ID token.

Not requiring users to fetch OIDC tokens ahead of time (for not just Ping but other providers that support OIDC tokens being returned) can be nice UX.

But they still need to fetch an access token which can hit the userinfo endpoint?

Are these concerns around Ping's OpenID Connect implementation or OpenID Connect in general?

[0] https://openid.net/specs/openid-connect-core-1_0.html#TokenResponse

mwitkow commented 7 years ago

@jefferai

Not requiring users to fetch OIDC tokens ahead of time (for not just Ping but other providers that support OIDC tokens being returned) can be nice UX.

In that case, the vault CLI client would need to implement the OAuth2 flow. The reason I opted out of it in the proposal is that it is:

The reason I opted in the design to have a "ready-made" token passed in is to simplify all this and make it "pluggable" by some third party (such as a subcommand).

jefferai commented 7 years ago

The id_token is mandatory in the response[0].

If what's coming back is an OIDC TokenResponse.

Is the issue that users will login to ping not using the openid scope? If Ping supports OpenID Connect, there surely must be a way to always get an ID token.

According to the people currently running a Ping identity server who have tried this out, it doesn't, because it depends on which OAuth2 flow you've used to authenticate to Ping. Some flows return only OAuth2 bearer tokens, some also return OIDC tokens. Maybe @mikeokner can get in touch with Ping support and find out if there is a way to enable it for all flows, but they have not yet found a way to do this. But OIDC does mandate a UserInfo endpoint, and Ping appears to always support that whether you're coming in with a bearer token or OIDC token.

And, generally speaking, there are lots of OIDC providers where the nicest user experience isn't to go log in through a web site and copy/paste a token but to simply provide the user/pass to Vault's login endpoint.

mikeokner commented 7 years ago

The id_token is mandatory in the response[0]. Is the concern that users will login to ping not using the openid scope? If Ping supports OpenID Connect, there surely must be a way to always get an ID token.

We haven't been able to get an ID token back when using the resource owner password flow. Other flows do return it. I don't know enough to say whether that's a failing on Ping's side or a config somewhere we're missing.

In that case, the vault CLI client would need to implement the OAuth2 flow.

That's exactly what I did :)

But they still need to fetch an access token which can hit the userinfo endpoint?

complicated to do: implementing an HTTP server in a CLI command with a browser AuthCode flow (e.g. popping out a browser)

Those two reasons are why we went with the resource owner password flow in my PR, not auth code. It's an available flow with our IDP and makes things easy for the end user.

I could see a combined implementation that allows auth with either username/password or token, and then reads group assignments from the userinfo URL or id token if present.

jefferai commented 7 years ago

@mwitkow Honestly, I don't really see why passing in a bearer token and hitting up UserInfo vs. passing in OIDC only is really a problem.

ericchiang commented 7 years ago

Ah okay. I took a look at https://github.com/hashicorp/vault/pull/2571

We haven't been able to get an ID token back when using the resource owner password flow.

A lot of providers don't implement this flow, and I think they're right in avoiding it. It steps around any 2 factor auth that the provider might implement, for example.

@mwitkow Honestly, I don't really see why passing in a bearer token and hitting up UserInfo vs. passing in OIDC only is really a problem.

Because the id_token is signed and has things like the intended audience (client_id) so others can ensure that the user logged in through an authorized client.

jefferai commented 7 years ago

A lot of providers don't implement this flow, and I think they're right in avoiding it. It steps around any 2 factor auth that the provider might implement, for example.

Depends, e.g. if the provider sends a push notification for 2nd factor before responding with authorization information (Ping supports this as do others).

Because the id_token is signed and has things like the intended audience (client_id) so others can ensure that the user logged in through an authorized client.

And yet, in cases where this isn't possible, getting the info from UserInfo still allows for significant code reuse and serves the needs of other users.

mwitkow commented 7 years ago

I looked at #2571, it only implements part of the OAuth2 spec, namely the Client Credentials Grant (https://tools.ietf.org/html/rfc6749#section-4.4). This is the least secure form of OAuth2 authentication, and most providers do not provide it. It basically prevents strong security by passing credentials (passwords) through a third party (in this case Vault), and removes the possibility of using two-factor auth.

The most used, and most widely supported OAuth2 flows are:

The PR in #2571 doesn't support them, as it doesn't deal with Access Tokens at all.

@jefferai I would be ok with having a combined OIDC and OAuth2 provider, if both were only based on Tokens:

Depends, e.g. if the provider sends a push notification for 2nd factor before responding with authorization information (Ping supports this as do others).

This is not part of any OAuth2 standard. Can you provide a list of providers other than Ping that support it?

That's exactly what I did :)

By auth flow, I meant the Authorization Code flow, which requires browser interaction and a callback URL to a localhost webserver. I don't see that in your PR; is it in another branch perhaps?

mikeokner commented 7 years ago

I looked at #2571, it only implements part of the OAuth2 spec, namely the Client Credentials Grant

It's actually the Resource Owner Password flow, but yes, most providers (especially public SaaS offerings like G-Suite) don't provide it. But as @jefferai mentioned, it doesn't totally preclude the use of MFA for providers like Ping that use a push notification and app before responding to the original request with an auth token.

For providers who do provide it, it's the most convenient option for Vault cli users as there's no need to pre-fetch a token or pop a browser. It's part of the official oauth2 spec, so I don't see why a generic oauth/oidc backend for Vault shouldn't implement it. The Okta backend already uses username/password, so it's not like Vault isn't already handling external credentials, either.

Edit:

This is not part of any OAuth2 standard.

MFA isn't anywhere in the spec. How the resource owner is actually authenticated, how many factors, etc., is solely up to the provider to implement.

ericchiang commented 7 years ago

And yet, in cases where this isn't possible, getting the info from UserInfo still allows for significant code reuse and serves the needs of other users.

@jefferai when developing these plugins for Kubernetes, we found the more common case is that the server (Kubernetes, Vault) doesn't want to have to trust every client that the provider issues. It eliminates the ability to use public providers (e.g. Google), and even in private provider deployments it means that you have to have close control over who can be issued client credentials.

For this alone I think accepting id_tokens would be a good idea, even if Vault chooses to accept access_tokens as well.

jefferai commented 7 years ago

@ericchiang I'm not suggesting ignoring id_tokens, I'm suggesting that if what comes in isn't a JWT that it can fall back to a UserInfo lookup.

jefferai commented 7 years ago

(Recognizing that this puts more configuration requirements (or possible .well-known support) into the backend!)

bkrodgers commented 7 years ago

Despite the OIDC spec's claim that it's a superset of OAuth2, it really isn't. It drops two flows (client credentials and resource owner) that are part of the OAuth2 spec from its own definition, which leaves ambiguity about how a provider should support them in what I'll call "OIDC" mode. Ping has chosen not to. The specs are a complete mess, but that doesn't mean we can't make this work. For purposes of users accessing Vault, the Resource Owner flow provides -- by far -- the best experience.

While the Resource Owner flow is indeed generally discouraged, and not supported by some public implementations, it's perfectly fine for situations where you're running an IDP with your own organization's identities and you fully trust the resource server (in this case Vault). Both the Okta and LDAP backends are already collecting credentials on behalf of the user. This would be no different.

We (I'm with https://github.com/hashicorp/vault/pull/2571) are not proposing that we wouldn't also support the use you suggest. We're trying to come up with a common implementation so there doesn't need to be multiple flavors of an OAuth/OIDC backend. Your use case is valid, and so is ours.

As for MFA being part of a spec, it's not part of OAuth2, OIDC, or SAML. It's completely up to the provider if and how they support it. Ping does though. Others may too, but that's up to their own implementation.

mwitkow commented 7 years ago

Jeff, from a code reuse point of view, the only code shared between the two implementations would be the users, groups and roles policy mappings.

The OIDC flow doesn't require talking to an OAuth2 server at all; instead it depends on JWT parsing, discovery URL parsing, and cert fetching (on initialization). All of this is done inside go-oidc.

I would love to see a full-spec OAuth2 implementation with AuthCode+UserInfo and the MFA flows. They all require speaking the OAuth2 protocol per request.

So for practical purposes I'd recommend splitting them into OAuth2 and OIDC backends.

I'll do a strawman PR next week for a pure OIDC one to demonstrate.

bkrodgers commented 7 years ago

As long as you don't foresee ever wanting to add anything to yours to obtain the OIDC token, then you are right that it wouldn't have a ton of code reuse. They're still in somewhat overlapping subject areas though. That said, we can keep them separate if Jeff's OK with that.

mikeokner commented 7 years ago

Jeff, from a code reuse point of view, the only code shared between the two implementations would be the users, groups and roles policy mappings.

Maybe @ericchiang can chime back in with some additional suggestions, but coreos/go-oidc appears to heavily leverage the golang/oauth2 library I'm already using.

So, you'd end up with a pretty substantial amount of shared code in my opinion:

ericchiang commented 7 years ago

Maybe @ericchiang can chime back in with some additional suggestions, but coreos/go-oidc appears to heavily leverage the golang/oauth2 library I'm already using.

From the code @mwitkow is proposing, the CLI wouldn't have OAuth2 login logic; it'd only accept a bearer token that happens to be an ID token. All the OIDC code is server side. It actually doesn't even need the OAuth2 package, but will import it because we reference it in the package API.

mikeokner commented 7 years ago

Aha, so just to be sure I'm understanding this completely, when the user authenticates via Vault CLI by executing something like:

$ vault auth -method=oidc -token=<token>

Then <token> would be an OIDC ID token already acquired somehow from a token response? I was thinking it would be an auth code but now that I think about it, I don't really think either is a great solution. I'm unclear on how the user is expected to actually get that token in the first place. In my experience, GitHub making it easy to get a personal access token outside a browser-based redirect flow is the exception, not the rule.

@mwitkow, you initially stated that

Contrary to what's been discussed previously in #465, OIDC doesn't require browser flows to be used, and such is not an obstacle for Vault adoption.

but I think that's only because half the flow is being ignored in your proposal. How is the user supposed to get a token in the first place? Is there a standard OAuth2 or OIDC process for acquiring tokens programmatically, manually, or any other way outside a browser redirect flow?

ericchiang commented 7 years ago

Then <token> would be an OIDC ID token already acquired somehow from a token response?

Yes, that's what we do in Kubernetes. Companies build an OAuth2 client and return the id_token to the user. We actually prefer it because different companies have different needs for those portals: branding, authz requirements (only people with this email domain can log in), etc. Users log in through these and are then presented with commands to run which configure the command line tool (or download a configuration).

We have an extremely bare-bones implementation over in the dex repo of an example client[0].

How is the user supposed to get a token in the first place? Is there a standard Oauth2 or OIDC process for acquiring tokens programmatically or manually or any other way outside a browser redirect flow?

There aren't good standards for non-browser-based flows, or at least no good standards that a lot of providers implement. kubectl, for instance, doesn't log in, but instead consumes a pre-fetched id_token (with an optional refresh token).

OIDC providers that do implement non-web flows can also let their own command line tool fetch the id_token; that's generally out of spec as far as kubectl is concerned.

[0] https://github.com/coreos/dex/blob/master/cmd/example-app/main.go

jefferai commented 7 years ago

From my perspective I don't particularly care about where the token comes from; building the resource owner flow into a server-based backend is possible but seems like it's best as part of a larger oauth2/oidc effort so as not to duplicate work/code, and I've had too much experience with oauth2 to ever think a generic oauth2 backend can really work unless we add in a redirect_uri capability. Which is a separate conversation.

Having that be something that is a particular auth method added to the CLI is possible -- several of the backends actually share a single CLI helper, and how it's invoked triggers some minor behavioral differences -- so that could be a way to work with different providers where the end result is to get an OIDC or OAuth2 token. The key downside with it being CLI-specific is that you don't have a way to set any required API variables (IDs, secrets, etc.) in a way that hides them from the user, if needed; although since in the discussed flow they're providing a password, it may be possible to simply spread those variables around to client machines without worrying too much (they'd still not be public). But keeping those variables actually secret requires a way to set them into the backend.

It doesn't seem (unless I'm missing it) that anyone is insisting that fetching a token must be done by the backend itself. So if we can put that aside for the moment, it just comes down to: can a backend consume only OIDC ID tokens, or can it consume OAuth2 bearer tokens + userinfo endpoint. Either way the end data is the same (in theory). It doesn't seem all that onerous to do the latter, just a simple, single HTTP request.
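
(For reference, that single request is already wrapped by go-oidc, so the fallback could look roughly like the sketch below; provider setup is as in the earlier snippet and this is illustrative only.)

```go
package oidcsketch

import (
	"context"

	oidc "github.com/coreos/go-oidc"
	"golang.org/x/oauth2"
)

// claimsFromUserInfo uses an OAuth2 bearer token to query the provider's
// UserInfo endpoint, yielding the same kind of claims an id_token carries.
func claimsFromUserInfo(ctx context.Context, provider *oidc.Provider, accessToken string) (map[string]interface{}, error) {
	ts := oauth2.StaticTokenSource(&oauth2.Token{AccessToken: accessToken})
	userInfo, err := provider.UserInfo(ctx, ts)
	if err != nil {
		return nil, err
	}

	var claims map[string]interface{}
	if err := userInfo.Claims(&claims); err != nil {
		return nil, err
	}
	return claims, nil
}
```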

So here are my questions to everyone:

1) Does anyone think that the token fetching flow happening outside the backend -- via a CLI helper, or a plain 'ol shell script -- is a problem?

2) Does anyone have any reason other than ideological why the backend shouldn't accept OAuth2 token + userinfo URL in addition to OIDC?

Remember, the perfect is the enemy of the good. I understand (and am fully aware of, and have already been aware of) arguments about why OIDC tokens are better than OAuth2 + userinfo, but while Vault is opinionated it's not dogmatic. It tries to enable useful flows that meet varied needs. And from a maintainability perspective fewer backends is always better than more.

bkrodgers commented 7 years ago

Personally I'd like to see token fetching stay inside the Vault CLI, and not require distributing and having users configure an OAuth endpoint, client ID, and secret before being able to start using Vault. That was the motivation for putting it in the backend. I've got hundreds of developers who will be using Vault. I'd like to just be able to say "download the vault binary, set VAULT_ADDR, run vault auth -method oauth2" and that's it.

jefferai commented 7 years ago

@bkrodgers How would you avoid that if the token fetching was in the Vault CLI?

jefferai commented 7 years ago

@bkrodgers Also do you consider the client ID/client secret to be secret from those users? Or just secret to your org, since they also are providing their own credentials?

bkrodgers commented 7 years ago

In our implementation on the PR, the user/pass the CLI collects get passed to the backend, which calls Ping for us. So the backend config has that info. To do it client side, yes, we'd need to distribute that one way or the other, which is what I'd ideally like to avoid.

I don't necessarily consider it particularly secret. It wouldn't let them into Vault, though there's a (low) risk it can be used to create a phishing attack of some sort. Were you thinking the back end could have an unprotected endpoint to allow the CLI to fetch them? That'd work, but keeping it in the backend seems somewhat more secure.

jefferai commented 7 years ago

In our implementation on the PR, the user/pass the CLI collects get passed to the backend, which calls Ping for us.

This is not what your org has been saying on this issue, which is that the CLI fetches the tokens.

bkrodgers commented 7 years ago

I'm not sure where Mike or I have said that. The code in https://github.com/hashicorp/vault/pull/2571 is pretty clear that it doesn't. It works the same way Okta and LDAP backends work. Ask the user for their creds, pass them to the backend, backend calls auth provider.

jefferai commented 7 years ago

I'm not sure where Mike or I have said that.

https://github.com/hashicorp/vault/issues/2525#issuecomment-292585336 for example.

bkrodgers commented 7 years ago

OK, long thread. Mike was mistaken saying that. Regardless, no, it doesn't. Code has been consistent. :)

jefferai commented 7 years ago

@bkrodgers Haven't looked at any code, since this is sort of a pre-code discussion. As a result, details in the discussion are important :-)

mwitkow commented 7 years ago

Jeff, how about this as a proposal:

We implement a single OAuth2 backend. It accepts OIDC Identity tokens and Access Tokens+UserInfo. The backend is simple because it doesn't do any OAuth2 flows; the only call is a simple UserInfo fetch configured via a URL.
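
(Continuing the earlier sketches, the backend's login handling could branch roughly as below: try to verify the submitted token as an id_token, and fall back to the configured UserInfo endpoint otherwise. Purely illustrative, same placeholder names and package as before.)

```go
// login is a sketch of the combined backend: verify the token as an OIDC
// id_token if possible, otherwise treat it as an OAuth2 access token and
// resolve claims via the UserInfo endpoint (claimsFromUserInfo above).
func login(ctx context.Context, provider *oidc.Provider, verifier *oidc.IDTokenVerifier, token string) (map[string]interface{}, error) {
	if idToken, err := verifier.Verify(ctx, token); err == nil {
		var claims map[string]interface{}
		if err := idToken.Claims(&claims); err != nil {
			return nil, err
		}
		return claims, nil
	}
	// Not a verifiable id_token: fall back to UserInfo with the bearer token.
	return claimsFromUserInfo(ctx, provider, token)
}
```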

We make the Client simple, without any OAuth2 logic (a simple single field). We document and rely on users to provide a separate bash script or program. This fits into what Eric said: a lot of clients have custom logic (e.g. supporting Google IAM service account flows for Access Tokens).

We can implement the Resource Owner flow in a separate client that gets an access token and passes it to a backend.

Thoughts?

bkrodgers commented 7 years ago

Rather than distribute a helper client just to get the token, I'll probably end up forking Vault and distributing an integrated CLI with that included. I'm really trying to keep this straightforward for my users.

jefferai commented 7 years ago

Sounds drastic. Give me some time to think, please.

ericchiang commented 7 years ago

It accepts OIDC Identity tokens and Access Tokens+UserInfo.

Want to reiterate that accepting an access token means you can't use this with a public provider because you can't limit the client it was issued through. E.g. this feature won't work with Google. edit: "won't work" is a bit strong, see my comment below for an explanation.

That's fine but needs to be called out in documentation.

jefferai commented 7 years ago

@ericchiang What do you mean by "won't work with Google"? The main point of the original proposal is that you don't make calls to a third party because you're relying on the JWT signature and simply matching claims. That's not going to change if the claim information is being fetched a different way in some cases. If Google already gives you a JWT you're not going to be calling back to Google for any reason are you? Or are you implying that you'll still need/want to call the userinfo endpoint?

mwitkow commented 7 years ago

Brian, why fork Vault as a whole, instead of just starting a separate contrib project that does OAuth2 flows (including Res Owner) on the client side and wraps the vault CLI (as a single binary)?
