Keats / jsonwebtoken

JWT lib in Rust
MIT License

v7 discussion #76

Closed by Keats 4 years ago

Keats commented 5 years ago

There are quite a few changes happening in the PRs: more input formats for RSA (#69, #74), ECDSA signing & verification (#73), as well as some Validation changes.

I have already removed the iat check in the next branch since it isn't something that should be checked.

Right now, Validation::algorithms is a Vec. I don't remember why, but it shouldn't be: it should be a single algorithm: Algorithm field instead. I will accept a PR for that or do it myself later.
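A sketch of what that change would look like (struct shapes abbreviated, not the crate's actual definitions):

#[derive(Debug, Clone, Copy, PartialEq)]
pub enum Algorithm { HS256, HS384, HS512 } // abbreviated: only the HMAC variants shown

// currently: a list of acceptable algorithms
pub struct ValidationCurrent { pub algorithms: Vec<Algorithm> }

// proposed: exactly one expected algorithm
pub struct ValidationProposed { pub algorithm: Algorithm }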

#69 also adds a standalone verify_rsa fn; I'm wondering if we can somehow streamline it with the rest of the crate.

Since Rust doesn't have default parameters, we always have to pass a Validation struct currently. Maybe we can put the decoding onto this struct instead so you get something like:

// currently
let token = decode::<Claims>(&token, "secret".as_ref(), &Validation::default())?;

// possible
// `JwtDecoder` has the same fields as the current `Validation`
let token = JwtDecoder::default().decode_hs256::<Claims>(&token, "secret".as_ref())?;

This way we don't have a single function where we are trying to fit all the arguments, and the user has to select explicitly which decode fn to use. This solves the vec of algorithms at the same time and allows better documentation for each. The downside is duplication of those functions/docs for the various digest sizes (decode_hs256, decode_hs384, decode_hs512 for each algo).

Any other ideas/criticisms/things missing?

ccing the people involved in various issues/PRs recently @AaronFriel @jbg @Jake-Shadle @matthew-nichols-westtel @greglearns

Jake-Shadle commented 5 years ago

I only became aware of this lib yesterday because of the bug I was tracking down, and while #74 works, it's not what I would call clean.

So my one thought, having looked at this code only briefly, is that it might be nice to expose some helpers for deserializing the RSA key based on what the user knows about it, and to pass that key down to the API, rather than the current method of just trying both formats. The reason I say helpers would be nice instead of just passing the key directly is that the latter would require the user to add 2 new dependencies, ring and untrusted; another option would of course be to expose the types and functions needed from those crates as part of the public API of this lib.

briansmith commented 5 years ago

In my future plans for ring, I expect the user will be able to create keys in ways other than reading them from byte arrays (files). For example, some users may want to generate a private key in memory and never serialize it. Or, we may support importing keys from the operating system where we never have the raw bytes of the key. And, I think once we have that functionality, people will want to use it with a library like this. So IMO it would be better to delegate all the key object creation to ring other than PEM parsing (since ring doesn't do PEM).

briansmith commented 5 years ago

In other words, rather than having a PKCS#8 parsing API in jsonwebtoken, and a "raw DER" parsing API here, I recommend instead just having the user use ring's parsing API to construct a key and pass it into jsonwebtoken.

Keats commented 5 years ago

I recommend instead just having the user use ring's parsing API to construct a key and pass it into jsonwebtoken.

That would require users having to add ring and untrusted to their list of direct dependencies, which isn't great in terms of UX. Also, if we somehow decide to have an openssl backend in addition to ring, it would make things complicated.

briansmith commented 5 years ago

That would require users having to add ring and untrusted to their list of direct dependencies, which isn't great in terms of UX.

In ring, we're gradually phasing out the use of untrusted in the API; maybe that will be done in ring 0.15. I don't think it's problematic for people who want to use ring to use ring's API to construct the key. That seems much better to me than having this crate (and every crate that uses ring) duplicate ring's API to hide the fact that ring is used.

Also if we somehow decide to have an openssl backend in addition of ring it would make things complicated

I personally think that would hurt the value proposition of the crate and would be a lot of work, so I wouldn't do it. But obviously I'm biased.

Keats commented 5 years ago

I don't think it's problematic for people who want to use ring to use ring's API to construct the key. That seems much better to me than having this crate (and every crate that uses ring) duplicate ring's API to hide the fact that ring is used.

Right now I need to bump the major version of this crate every time ring gets a major version. If the symbol versioning PR is merged, I won't need to bump it all the time. However, if ring is exposed, I would have to bump on every breaking change even after that.

manifest commented 5 years ago

In other words, rather than having a PKCS#8 parsing API in jsonwebtoken, and a "raw DER" parsing API here, i recommend instead just have the user use ring's parsing API to construct a key and pass it into jsonwebtoken.

@briansmith In our case we have web services written in Rust, Erlang, Ruby and Go. They need to use the same key to verify access tokens in incoming requests. If I generate keys using ring, I get a keypair in PKCS#8 DER format instead of the plain private key that, for instance, the Erlang crypto library uses. I would need to parse the keypair in the applications, and that doesn't seem like a good idea to me. Sadly there is currently no way to use the same PKCS#8 DER key for all the applications; that's why we currently use openssl to generate PEM keys for the Erlang apps to consume, and PKCS#8 DER (derived from the PEM keys) for the Rust apps.

Ideally, I would like to have a command line tool (because it just makes things easier for devops engineers) that allows generating keys in a common format. Maybe some day such a command line tool will be based on the ring project, if we make it possible.

manifest commented 5 years ago

@Keats it seems that it'll take some time to implement all the described features for v6. Can we have an intermediate release with ES256 support before that? Working with the branch creates a bunch of dependency-related issues.

Keats commented 5 years ago

An intermediate release would have to be a major one, sadly. However, I can probably merge that PR into the v6 branch, and you shouldn't have dependency issues at that point, no?

manifest commented 5 years ago

Merging into master without publishing a new version of the crate won't make a difference. The dependency issues I'm talking about exist because some other crates are using the current version of the jsonwebtoken crate.

manifest commented 5 years ago

An intermediate release would have to be a major one, sadly

Maybe we can have v6 with ES256 support, and the other features in v7? I mean, the version is just a number.

Keats commented 5 years ago

Maybe we can have v6 with ES256 support, and the other features in v7? I mean, the version is just a number.

We could but it is annoying for end users to have frequent breaking changes :/

manifest commented 5 years ago

We could but it is annoying for end users to have frequent breaking changes

It's always better to have releases available as soon as possible, in my opinion. You can always stay on a previous major release if you don't care about new features. It also shouldn't cause any problems if you're using semver properly.

Keats commented 5 years ago

I don't have the time to implement that right now, so let's get all the PRs into v6. That is going to be messy, but at least it won't block people.

Keats commented 5 years ago

The next version is at https://github.com/Keats/jsonwebtoken/pull/75 if people want to try it; I'm trying to get the other outstanding PRs merged into it.

Keats commented 5 years ago

The work on the next version has started in https://github.com/Keats/jsonwebtoken/pull/91 thanks to @Jake-Shadle

The plan is to add #69 and #87 to it (so 2 more key types: Modulo and Pem) and to figure out why some decoding does not work (#90, #77) by adding more tests.

Dowwie commented 5 years ago

@Jake-Shadle @Keats making this work with PEM would be really useful. I've thus far failed to use openssl::Rsa's PEM-to-DER helper functions to convert a public key PEM to DER and then make it work correctly with jsonwebtoken's decode.

clarkezone commented 5 years ago

(noob question) how do I reference v7 in a consuming app's Cargo.toml?

Keats commented 5 years ago

See https://stackoverflow.com/questions/54196846/how-to-specify-a-certain-commit-in-dependencies-in-cargo-toml for that: just point to the latest commit in the v7 branch and you should be good to go.
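For example, a minimal Cargo.toml entry along those lines (assuming the v7 work lives on the next branch; the commit hash is a placeholder to fill in):

[dependencies]
# track the branch...
jsonwebtoken = { git = "https://github.com/Keats/jsonwebtoken", branch = "next" }
# ...or pin a specific commit:
# jsonwebtoken = { git = "https://github.com/Keats/jsonwebtoken", rev = "<commit-sha>" }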

clarkezone commented 5 years ago

Agreed. It would be even nicer if it could load a JWK directly. I can't figure out how to supply the modulus and exponent from the JSON form; base64-decoding those strings fails for me.

Keats commented 5 years ago

v7 can load a RSA key from a modulus/exponent pair: https://github.com/Keats/jsonwebtoken/blob/next/tests/rsa.rs#L67-L100

clarkezone commented 5 years ago

I know, but how do I easily convert from the text format of a JWK?

I'm trying to convert n and e from their encoded form for use in jsonwebtoken:

const jwk = { "alg": "RS256", "kty": "RSA", "use": "sig", "x5c": [ "MIIDAzCCAeugAwIBAgIJQjv/H0ysfmAsMA0GCSqGSIb3DQEBCwUAMB8xHTAbBgNVBAMTFGNsYXJrZXpvbmUuYXV0aDAuY29tMB4XDTE3MDgyNjIzNDUwOVoXDTMxMDUwNTIzNDUwOVowHzEdMBsGA1UEAxMUY2xhcmtlem9uZS5hdXRoMC5jb20wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDY/4U3OFFS3+QaST6/XLtIvEQx5ic+APbpaOm1g7H0ow5otrntXsRb1IFNJxBhLG7oWuwTsU+/ZwaM6R4aDCSJRLvgNXCUEEmPNAEZzQx5UiYCU7uqTlCsLIjqaKf9cWAA9KDcUipukNBYbyGOjLxIQU1pkf+HMLcQpDYTq1K4MmOBe45xz1i8I+u+R1tu56dq821kKSazIncCYjP2NsuCq3TOywAhmlOk8t5p0ESfQ1GgjS0EnbhXHD+lgfCWoRJTMWtmreA6Qv+eceAtMGeD0zykxajL/MxYya5P5ad6OGe5Acv90hr27XVAskgUMZFt6WiOOw2/OndeOb2d7m11AgMBAAGjQjBAMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYEFKIJHJKhcYoY5unvKZtzhvN4v4EZMA4GA1UdDwEB/wQEAwIChDANBgkqhkiG9w0BAQsFAAOCAQEAYN/CFEeAKB8CuV8JRlO17bmCspjoDfSj5Q+iAHiC3MSOo2pJEZCADPJ18UAKfXvA8qjpiLyhnAOb/KsFqas0Qo3v0ZHR3M+dKZZzBA9ThKif4S2N4eNdg/Kpcd6xT9nph+K78L91B18G4LJawRuzPt2If3O/vELAY7pWSNpof6Tj171HPEak/39tdiUyXs9k4qRiH4t0DZGgWIIP1e5ZsZ67leBD/+tsdqARHBSX1G7oTmU08JgUVzrFfHunEgwYo+OQa5g87TRi170gCigu8q3OAV4uHrKGp8XdCPVCyE/ZGZCVIefd22nU+cd28o0WKvkFmieBKFLwV0VTItgitg==" ], "n": "2P-FNzhRUt_kGkk-v1y7SLxEMeYnPgD26WjptYOx9KMOaLa57V7EW9SBTScQYSxu6FrsE7FPv2cGjOkeGgwkiUS74DVwlBBJjzQBGc0MeVImAlO7qk5QrCyI6min_XFgAPSg3FIqbpDQWG8hjoy8SEFNaZH_hzC3EKQ2E6tSuDJjgXuOcc9YvCPrvkdbbuenavNtZCkmsyJ3AmIz9jbLgqt0zssAIZpTpPLeadBEn0NRoI0tBJ24Vxw_pYHwlqESUzFrZq3gOkL_nnHgLTBng9M8pMWoy_zMWMmuT-WnejhnuQHL_dIa9u11QLJIFDGRbelojjsNvzp3Xjm9ne5tdQ", "e": "AQAB", "kid": "N0M5NDA5RkI4NjY1QUVERTlDMzE4MkEyRjA4QThCOTI2NTQzNzhFNw", "x5t": "N0M5NDA5RkI4NjY1QUVERTlDMzE4MkEyRjA4QThCOTI2NTQzNzhFNw" }
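For reference, the n and e members of a JWK are base64url-encoded without padding (RFC 7518), which is why plain base64 decoding fails. A minimal sketch of decoding them, assuming the pre-1.0 base64 crate API:

use base64::{decode_config, DecodeError, URL_SAFE_NO_PAD};

// JWK key components use the URL-safe alphabet with no '=' padding.
fn decode_jwk_component(s: &str) -> Result<Vec<u8>, DecodeError> {
    decode_config(s, URL_SAFE_NO_PAD)
}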

Dowwie commented 5 years ago

v7 can load a RSA key from a modulus/exponent pair: https://github.com/Keats/jsonwebtoken/blob/next/tests/rsa.rs#L67-L100

@Keats any interest in also supporting PEM files?

Keats commented 5 years ago

Yes that's the main blocker for the v7 release: https://github.com/Keats/jsonwebtoken/issues/77#issuecomment-520248623 (details from that comment and below).

Keats commented 4 years ago

An alpha version has been released: https://docs.rs/jsonwebtoken/7.0.0-alpha.1/jsonwebtoken/index.html

If anyone has some time to try it and give feedback, that would be great. The main changes are that the encode/decode functions now take the PEM instead of the DER format, that decoding can use RSA public key components, and that RSA-PSS is supported.

cc @Jake-Shadle It's not using the Key enum anymore, but it should work well for modulus/exponent. The only catch is that right now it takes &str for them, since they are usually b64 encoded and coming from some JSON data.

rib commented 4 years ago

Hi @Keats, thanks for the version 7 update! I've been looking at using this for verifying access and id tokens from Amazon Cognito, and in particular the fact that it's now possible to directly pass the RSA exponent and modulus values for the key has been really helpful.

One thing I have got stuck on, though, is being able to efficiently handle additional validation of the claims beyond what's handled by Validation.

I'm making a minimal crate that handles fetching keys from https://cognito-idp.{region}.amazonaws.com/{userPoolId}/.well-known/jwks.json and checks that claims like iss and aud are set as appropriate for Cognito tokens.

I currently have a KeySet::decode<T>(self, token, validator) -> T API, similar to jsonwebtoken's decode API, whereby someone else is responsible for defining their own Claims type, and I have helpers to set up a Validation struct depending on whether you're validating a Cognito id token or access token.

In this case there are some non-standard "client_id" and "token_use" claims I want to validate, but because I'm acting as an intermediary that isn't responsible for the final claims struct/type, I don't have an efficient way to do additional validation after jsonwebtoken does its validation.

Since it's possible for administrators of an AWS Cognito user pool to extend the claims, I'm hoping to keep it generic.

It could be good if the claims: Map<String, Value> that's created as part of validation weren't discarded, so that further validation of claims would be straightforward without needing to repeat the base64 decode -> deserialize steps multiple times. At least in my situation I think it would be good if this map could be added to the returned TokenData, and I wonder if you'd consider a change like that?

Sorry if that's a bit too off topic for here; I can open another issue if it makes sense.

Thanks again!

Dowwie commented 4 years ago

@rib @Keats It would be nice to turn validation into a behavior exposed as a trait, such as trait Validator<C>, and allow anyone to write a custom type that impls Validator<C>, where C is the custom claims type (or something along those lines). This ought to facilitate what @rib and others are trying to accomplish.
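A minimal sketch of that idea (the trait and all names here are hypothetical, not part of the crate):

use serde_json::Value;

// Hypothetical custom-validation trait; `C` is the user's claims type.
pub trait Validator<C> {
    fn validate(&self, claims: &C) -> Result<(), String>;
}

// Example impl: check a fixed audience on raw JSON claims.
struct AudienceIs(&'static str);

impl Validator<Value> for AudienceIs {
    fn validate(&self, claims: &Value) -> Result<(), String> {
        match claims.get("aud").and_then(Value::as_str) {
            Some(aud) if aud == self.0 => Ok(()),
            _ => Err(format!("aud is not {}", self.0)),
        }
    }
}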

@rib note that @Keats is maintaining this library as a courtesy but isn't currently using it in his own work, so he needs contributors for heavy lifting such as this. I don't have a pressing need for custom validation (yet), so I won't be contributing these changes. Also, a v7 release is imminent; any related work would land in a future release.

Keats commented 4 years ago

I'm not sure I want to make validation trait-based, since for 99% of users the current approach works. Adding the decoded claims map could be useful, but it would expose some serde_json internals (the Map) from the crate. You can easily add a to_value(&claims).unwrap().as_object().unwrap() to get the Map back. It's a bit wasteful since you re-serialize, but claims should in general be small enough that it's not a bottleneck.
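A minimal sketch of that round trip, assuming a user-defined claims struct that derives Serialize:

use serde::Serialize;
use serde_json::{to_value, Map, Value};

#[derive(Serialize)]
struct Claims { sub: String, exp: u64 }

// Re-serialize the typed claims into a JSON map for any extra checks.
fn claims_as_map(claims: &Claims) -> Map<String, Value> {
    to_value(claims).unwrap().as_object().unwrap().clone()
}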

I'm thinking about whether to change modulus/exponent to &[u8] from &str to allow more input types, but I don't think I've seen a JWK not in JSON format, so I don't know whether it's worth changing.

rib commented 4 years ago

It doesn't seem ideal that serializing the claims type requires that it has fields for everything you want to validate, while the map has everything contained in the JWT. To get at all the original claims, it really doesn't feel right to have to separately decode and parse the token again after it's been validated. In total this results in 2x decode and 3x deserialize; although they are pretty tiny, it still doesn't seem good.

At the moment I'm experimenting with a slightly different approach to all of this with a few different ideas:

For the modulus and exponent I was taking those from a JWK, so a base64 string was perfect, but yeah, maybe there are other use cases where that's not convenient.

JTKBowers commented 4 years ago

Hey everyone! I'm writing a generic JWK crate (like @rib, with the primary motivation of interacting with AWS Cognito). I've been playing around with a couple of the most recent commits on this branch, so I thought I'd chime in with my 2c.

I had a few questions and ideas:

Keats commented 4 years ago

I think a decode_raw could be added if you want the Map, but decode should still deserialize into a struct by default, and you can build additional validation on top of that if needed. Would it already work if you used a Map as the generic type? The default basic JWT flow should be as easy as possible.
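A sketch of that suggestion, assuming a decode signature that takes the key as &[u8], as in v6:

use jsonwebtoken::{decode, errors::Result, Validation};
use serde_json::{Map, Value};

// Map<String, Value> implements Deserialize, so it can stand in for a claims struct.
fn decode_to_map(token: &str, secret: &[u8]) -> Result<Map<String, Value>> {
    Ok(decode::<Map<String, Value>>(token, secret, &Validation::default())?.claims)
}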

Would it be possible to expose one decode function that takes a key enum?

That was the case in the next branch a while ago, but it wasn't really great from a user perspective.

Specifying the key components as base64 is convenient, but it does seem slightly better to pass them as a &[u8] from a separation of concerns point of view.

This is likely to happen.

Is there any appetite for also being able to pass elliptic curve key components to the library?

Yes, there was some work on it in a branch before we realised it wasn't supported by ring, so it was dropped.

rib commented 4 years ago

I think the thing I'm finding awkward with the deserialize-into-a-struct design (and maybe it's similar for @JTKBowers if he's also making Cognito middleware) is that there may be multiple consumers of a deserialized token. If jsonwebtoken is considered a low-level library for dealing with tokens (at least that's how I've been looking to use it), then there will be others who want to build middleware that specifically deals with Cognito or Azure details etc., and then there will be a final user, and they are the ones who determine the concrete type. From this point of view (of wanting to incrementally add Cognito-specific validation) we don't really get to choose to deserialize into a Map.

..hmm, that's not quite true: with your suggestion a middleware can actually get all the info it needs by passing a Map, and it can then also offer the user a generic claims-struct-based API if it uses ::from_value() to convert that map into the user's struct. The only issue is that in this case jsonwebtoken is going to deserialize the JSON text into two separate Maps, which is completely redundant work that seems like it should be relatively easy to avoid.

I think maybe deserializing into a struct type could be handled closer to the surface of the API as an optional convenience API (maybe even the default API that documentation steers you toward), so that internally it would only create the Map needed to validate all the claims. I think it would then be quite easy to expose a decode_raw API as suggested.

Atm I've gone down a bit of a rabbit hole creating an alternative API more or less from scratch, but since I then borrowed heavily from jsonwebtoken, it's still not really a million miles away. Maybe if I get a bit further along today I'll just push what I have to a GitHub repo; I'd be interested to hear if there are any ideas that could make sense to bring back into jsonwebtoken.

Keats commented 4 years ago

If jsonwebtoken is considered a low-level library for dealing with tokens (at least that's how I've been looking to use it)

That's not really the goal. The goal was to have https://pyjwt.readthedocs.io/en/latest/ in Rust, where the decoded claims are in a struct instead of a hashmap. Some parts of the inner API are public (https://docs.rs/jsonwebtoken/7.0.0-alpha.1/jsonwebtoken/crypto/index.html) to ensure people can build other abstractions on top of it. At the same time, I don't really want to make everything public, otherwise I can never change anything in the library without a major version.

The main issue with a decode_raw is that it needs to be multiplied by the number of decode fns, which is far from ideal. Adding back a Key enum could solve that at the cost of UX (internally there is already an enum like that), which isn't great either.

Keats commented 4 years ago

@JTKBowers would you have the time to try a Key approach again, for decoding only? There is already an internal one, so it shouldn't be too hard. As for the n/e type, I think <T: ?Sized + AsRef<[u8]>> could work well, so people can use whatever.
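A tiny sketch of that bound (the function is hypothetical, just to show what callers could pass):

// Accepts &str, String, &[u8], Vec<u8>, ... for both components.
fn set_rsa_components<T: ?Sized + AsRef<[u8]>>(n: &T, e: &T) -> (Vec<u8>, Vec<u8>) {
    (n.as_ref().to_vec(), e.as_ref().to_vec())
}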

rib commented 4 years ago

I mentioned above I was experimenting with an alternative API, so in case it might be of interest here, I ended up making some good progress and having an initial working implementation here:

https://github.com/rib/jwt-rust/blob/master/src/lib.rs

At least initially it was heavily based on jsonwebtoken code. I ported most of jsonwebtoken's unit tests across to this API too, but haven't ported the ecdsa tests yet.

I haven't documented the API yet, but an encode -> validate round trip currently looks like this:

let alg = Algorithm::new_hmac(AlgorithmID::HS256, "secret").unwrap();
let header = json!({ "alg": "HS256" });
let claims = json!({
    "aud": "test",
    "exp": get_time() + 10000,
    "my_claim": "foo"
});
let token_str = encode(None, &header, &claims, &alg).await.unwrap();

let verifier = Verifier::create()
                    .with_audience("test")
                    .claim_equals("my_claim", "foo")
                    .build().unwrap();
let claims: Value = verifier.verify(&token_str, &alg).await.unwrap();

Some of the things I think are notable about this API:

The crypto state is contained in an Algorithm struct (a design idea borrowed from java-jwt). This way any associated base64 decoding + PEM parsing can be done once, up front. Instead of having a very generic key: &[u8], it has separate constructors for different algorithms, like ::new_hmac, ::new_rsa_pem_signer, ::new_rsa_pem_verifier.

The Algorithm handles the lower level API for signing and verifying signatures but doesn't care about the structure of the messages themselves.

The Algorithm sign and verify APIs are async, and the idea is that they can abstract a key set in the future, which might involve network IO, e.g. to fetch a JWKS URL (to be handled in a separate project).

The verification gains a lot of flexibility by using a builder pattern for construction, which can hopefully avoid an explosion of top-level verify APIs to handle different cases.

The claim verification generalises to custom claims: .claim_equals() can match values against a constant, .claim_matches() can match against a regex, while .claim_equals_one_of() and .claim_matches_one_of() let you match against a set of values or regex patterns. I think it would potentially be straightforward to accept a closure too for any other quirky cases.

The simple verifier.verify() only returns the claims deserialized into whatever type you like (above it's deserializing into a serde_json Value, but that could be a custom claims struct). As with jsonwebtoken that implies deserializing at least twice, so there is a lower-level API verify_for_time() that returns the claims Map<String, Value> that must necessarily be deserialized internally, and it also lets you specify the 'now' timestamp.

I'd be very interested to hear any thoughts / feedback about this design, and would be happy to see any of it folded into jsonwebtoken if it makes sense.

Keats commented 4 years ago

Hmm, this is definitely too different from the current goal. I think we can lift some ideas from it, though!

  1. Expose a function to convert PEM -> DER so we only decode it once (based on PemEncodedKey but without exposing the struct, to keep that internal).
  2. Add EncodingKey (secret, pem, der) and DecodingKey (secret, pem, der, n/e, ECDSA components once ring supports them) enums to encode/decode; a rough sketch follows below.
  3. Optionally take a closure in the Validation struct to set up some custom validation, where the argument is the claims map and it returns a Result<()>.
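A rough sketch of the item-2 enums (variant names illustrative, not final):

// Keys used to sign tokens.
pub enum EncodingKey<'a> {
    Secret(&'a [u8]),
    Pem(&'a [u8]),
    Der(&'a [u8]),
}

// Keys used to verify tokens; decoding also needs the n/e form.
pub enum DecodingKey<'a> {
    Secret(&'a [u8]),
    Pem(&'a [u8]),
    Der(&'a [u8]),
    RsaComponents { n: &'a str, e: &'a str },
}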

I don't want this crate to do any fetching even in the future so no async in there, too many potential things people might want to control there (eg caching) to be inside a library.

Overall, the current API would only change around EncodingKey/DecodingKey and would stay the same otherwise.

I definitely won't have the time to work on that in the near future, so it would have to come as a PR.

rib commented 4 years ago

I don't want this crate to do any fetching even in the future so no async in there, too many potential things people might want to control there (eg caching) to be inside a library.

Yeah, I also wasn't anticipating that this implementation would ever handle fetching JWKS key sets itself; I was just keeping open the possibility that it would be extensible (i.e. another middleware could essentially provide its own implementation of some Algorithm trait, which could potentially wrap a key set such as a remote JWKS set).

Actually, I'm having second thoughts about this aspect, because in practice I already have a separate project where I'm handling JWKS fetching, and it seems pretty much fine to do that on top of this instead; it's not obvious that it would be any better if it were abstracting over an Algorithm.

It also means I could drop the optional kid argument for encode, which is only there to possibly allow this key set abstraction.

Dowwie commented 4 years ago

@Keats I started to experiment with item 3 (validator dependency injection). The boolean option fields in the Validation type are only applicable to the default validation logic in the validate fn. Injecting custom validation logic removes the need for such flags: if I'm injecting my own validation closure, I control the logic rather than rely on checking flag fields. Maybe, rather than replacing the entire validation logic, a closure could augment it: fn validate acts as the core validator, and the closure is called additionally if one is defined.

rib commented 4 years ago

Btw, for reference, since I also had doubts about it in the end: I updated https://github.com/rib/jwt-rust/ to only have a synchronous API, abandoning the idea that an Algorithm could abstract a key set in the future.

Keats commented 4 years ago

The custom validation fn would only be there to enhance the existing Validation, e.g. it would be called after all the other validation, if present. We probably need to add an error type like CustomValidation and an easy way to create a jsonwebtoken error from that fn.


JTKBowers commented 4 years ago

@JTKBowers would you have the time to try a Key approach again for decoding only? There is already an internal one so it shouldn't be too hard.

I'd be happy to! Let me know what I can do.

Keats commented 4 years ago

I'm thinking points 1 and 2 of https://github.com/Keats/jsonwebtoken/issues/76#issuecomment-559959232. Point 1 could be done transparently with an EncodingKey enum, so users can just save that struct in lazy_static or whatever and only decode it once.

Dowwie commented 4 years ago

@Keats why not target a cutoff date for any additional enhancements so that v7 can be released this year? I think I can make time to finish work on an extend_validation API in the near future, but even this functionality is scope creep.

Keats commented 4 years ago

The extend_validation can be added in a non-breaking way later on. The Encoding/Decoding keys cannot, and this is already v7; I want the API to stabilise.

Dowwie commented 4 years ago

@Keats in other words, you're saying that the final blocker for a v7 release is the work related to EncodingKey and DecodingKey? @JTKBowers

Keats commented 4 years ago

That sounds like putting a lot of pressure on @JTKBowers, but essentially I want to make sure jsonwebtoken v7 is stable for at least a year in terms of API.

Encoding/decoding keys should be pretty easy to implement and would allow the API to be stable and performant.

Dowwie commented 4 years ago

@Keats If I understand correctly, you want to capture as many of the valuable ideas for v7 now as you can, because you want the v7 crate to be the standard for the year that follows its release. Sounds fair, but you will have to determine what is good enough to ship rather than wait indefinitely for more good ideas, right? The goalpost was moved not too long ago.

Regarding extend_validation, I've been considering tradeoffs related to how to inject a Validation type with a closure. A boxed trait object is less visible and does not change the type signature, but it is not as performant as an unboxed closure. jsonwebtoken is performance-sensitive, so I prefer whichever approach is fastest (even at the cost of changing the type signature to Validation<V> where V: ExtendValidation, or something similar). https://stackoverflow.com/questions/27831944/how-do-i-store-a-closure-in-a-struct-in-rust

thoughts?

rib commented 4 years ago

I would guess that any performance difference between the closure approaches would be a drop in the ocean when validating a few token claims, so maybe that shouldn't be a concern? The base64 decode + e.g. RSA signature validation is surely going to be much, much more costly than an extra pointer dereference.

For now I think jsonwebtoken has some other lower-hanging ways to optimize before this is really a performance trade-off; I guess ergonomics should be the bigger concern for supporting a validation closure.

Dowwie commented 4 years ago

I found that an extended_validation: Option<Box<dyn Fn(...) -> Result<()>>> field can exist within the Validation type only if ALL of the derived proc macros are removed (Debug, PartialEq, Clone). Removing them does not seem to interfere with the library or its tests. Keeping these derives pushes a solution towards the one I initially proposed, using a generic and a trait Validate. If anyone can think of an alternative, please by all means share your idea.

Assuming we go with the trait object for extended_validation, I'm not really sure what the function signature should be for the closure. The claims map will undoubtedly be the first param; as for a second param, my best guess is a serde_json::Value, which seems the most flexible.
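For illustration, a minimal sketch of the two options (field names and the String error type are placeholders):

use serde_json::{Map, Value};

type ExtendedValidation = Box<dyn Fn(&Map<String, Value>) -> Result<(), String>>;

// Boxed closure: no signature change, but the field is incompatible with
// #[derive(Debug, Clone, PartialEq)], as described above.
struct ValidationBoxed {
    leeway: u64, // stand-in for the existing fields
    extended_validation: Option<ExtendedValidation>,
}

// Generic closure: static dispatch, but the public type signature changes.
struct ValidationGeneric<V: Fn(&Map<String, Value>) -> Result<(), String>> {
    leeway: u64,
    extended_validation: Option<V>,
}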

Thoughts?

rib commented 4 years ago

@Dowwie I'd be curious to know what you think of the validation solution I came up with in jwt-rust. It doesn't expose a closure so far, but it is generalised to support custom claims. The most common claims are particularly simple to check, with something like:

let verifier = Verifier::create()
    .issuer("http://some-auth-service.com")
    .audience("application_id")
    .subject("subject")
    .build();

and custom claims can be handled in a number of ways like:

let verifier = Verifier::create()
    .claim_equals("my_claim0", "value") // exact match
    .claim_matches("my_claim1", "value[0-9]") // regex
    .claim_equals_one_of("my_claim2", &["value0", "value1"]) // match against a set of values
    .claim_matches_one_of("my_claim3", &[regex0, regex1]) // a set of regex matches
    .build();

Out of curiosity, do you have a use case that couldn't be handled with this amount of generality?