auth0 / node-jsonwebtoken

JsonWebToken implementation for node.js http://self-issued.info/docs/draft-ietf-oauth-json-web-token.html
MIT License

Not guarding against giant-JSON DoS attacks? #212

Closed jtlapp closed 5 years ago

jtlapp commented 8 years ago

I posted this to whitehat@auth0.com 7 days ago but never got anything but an automated response. I figure that because DoS attacks are intractable anyway, there's no harm in posting here.

Scanning through the code and glancing through the dependent module "jws", it appears to me that clients could send monster-sized JSON objects inside JWTs, and the module will always decode the JSON, eating up both clock cycles and memory. It seems to me that there should be options to guard against this. Some possible options:

(1) Allow for enforcing a size limit on JWTs (or on the header/payload).
(2) Allow for verifying the signature before decoding the JSON.

There may be another solution having to do with with_indifferent_access, mentioned in this discussion of a Ruby library, but I didn't follow it: https://github.com/nov/json-jwt

Also, I've been making a general assessment of JWTs and am so far finding them too insecure to use for anything important. Any input you can provide on StackExchange would be helpful: http://security.stackexchange.com/questions/125135/what-scenarios-really-benefit-from-signed-jwts

jfromaniello commented 8 years ago

Thanks for sharing this concern here, and sorry we didn't respond on the whitehat address (I will check what happened).

I will think more about this:

(1) Allow for enforcing a size limit on JWTs (or header/payload).

This would be a great fit for this library. We could add a maxLength option to the verify function. What would be a good default for this option?

(2) Allow for verifying the signature before decoding JSON.

I am going to check this, but I was under the impression that this was already the case: we decode the header first, then verify the signature, then decode the payload.
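
For illustration, here is a minimal sketch of that ordering for an HS256 token, written against Node's built-in crypto rather than this library's actual internals; the function name and error handling are made up, and it assumes a Node version with 'base64url' encoding support.

const crypto = require('crypto');

function verifyHs256(token, secret) {
  const [headerB64, payloadB64, signatureB64] = token.split('.');
  if (!headerB64 || !payloadB64 || !signatureB64) {
    throw new Error('malformed token');
  }

  // 1. decode only the compact header to learn the algorithm
  const header = JSON.parse(Buffer.from(headerB64, 'base64url').toString('utf8'));
  if (header.alg !== 'HS256') {
    throw new Error('unexpected algorithm');
  }

  // 2. check the signature before touching the payload JSON
  const expected = crypto
    .createHmac('sha256', secret)
    .update(headerB64 + '.' + payloadB64)
    .digest('base64url');
  if (expected.length !== signatureB64.length ||
      !crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(signatureB64))) {
    throw new Error('invalid signature');
  }

  // 3. only now pay the cost of parsing the (potentially large) payload
  return JSON.parse(Buffer.from(payloadB64, 'base64url').toString('utf8'));
}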

jtlapp commented 8 years ago

Quick note: You have to be able to decode first, because in some applications the key may be a function of the user identified in the payload.

jfromaniello commented 8 years ago

You are right, though generally the key depends not on the "user identifier" but on the "issuer".

This library doesn't take care of that, as it expects the payload and the secret in the form of a string, but there are plenty of examples doing what you suggest.

An alternative could be changing this API to accept options.secret as a function and validating the length here before calling that secret function.
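
A rough sketch of that pattern using the library's existing jwt.decode and jwt.verify; the getSecretForIssuer callback and the length limit are hypothetical application-level pieces, not part of this library:

const jwt = require('jsonwebtoken');

const MAX_TOKEN_LENGTH = 2048; // arbitrary illustrative limit

function verifyWithIssuerLookup(token, getSecretForIssuer, callback) {
  // cheap length check before any decoding happens
  if (typeof token !== 'string' || token.length > MAX_TOKEN_LENGTH) {
    return callback(new Error('token missing or too long'));
  }

  // decode WITHOUT verifying, only to learn which issuer signed the token
  const unverified = jwt.decode(token);
  if (!unverified || typeof unverified.iss !== 'string') {
    return callback(new Error('cannot determine issuer'));
  }

  const secret = getSecretForIssuer(unverified.iss);
  if (!secret) {
    return callback(new Error('unknown issuer'));
  }

  // now do the real, verified decode
  jwt.verify(token, secret, { issuer: unverified.iss }, callback);
}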

jtlapp commented 8 years ago

Probably the most robust thing to do would be to validate the JSON, putting limits on the number of unquoted, unescaped colons and braces. It's really hard to come up with a single regex that is mindful of both quotes and character escapes, though. It can't be done for CSV, for example. So this sort of validation would cost more clock cycles than is ideal.

I think the correct solution lies in the JSON library. We should be able to set limits on the parser, such as the maximum number of objects to generate.

jtlapp commented 8 years ago

If you use a streaming JSON parser, you could count objects or otherwise verify objects during the parse and abort an excessive parse.
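
As a rough illustration of the count-and-abort idea from the last two comments, here is a pre-scan over the raw text before JSON.parse; it is not a true streaming parser, and the limits and names are made up:

function parseWithLimits(text, { maxObjects = 100, maxLength = 10000 } = {}) {
  if (text.length > maxLength) {
    throw new Error('JSON text too long');
  }

  let objects = 0;
  let inString = false;
  let escaped = false;

  for (const ch of text) {
    if (inString) {
      if (escaped) escaped = false;          // character after a backslash
      else if (ch === '\\') escaped = true;  // start of an escape sequence
      else if (ch === '"') inString = false; // closing quote
    } else if (ch === '"') {
      inString = true;
    } else if (ch === '{' || ch === '[') {
      // count only structural openings outside of string literals
      if (++objects > maxObjects) {
        throw new Error('too many objects/arrays');
      }
    }
  }

  return JSON.parse(text);
}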

jfromaniello commented 8 years ago

JSON.parse is quite limited, but it has an optional reviver function. Example:

const propertiesToParse = {
  issuer: {
    isValid: value => typeof value === 'string' && value.length < 2000
  }
};

const reviver = (key, value) => {
  if (key === '') {
    // the reviver is called last with an empty key for the root value;
    // returning undefined here would discard the entire result
    return value;
  }

  const validator = propertiesToParse[key];
  if (!validator) {
    // skip this property since it is not whitelisted
    return undefined;
  }
  if (!validator.isValid(value)) {
    // skip this property since it is not valid according to the validator
    return undefined;
  }

  // this property is whitelisted and valid
  return value;
};

// test payload
const json = JSON.stringify({ issuer: 'foo', foo: 'bar' });

console.log(JSON.parse(json, reviver));

// output:
// > { issuer: 'foo' }

jlitowitz commented 7 years ago

How about something like this? In verify.js, immediately after

if (!jwtString){
  return done(new JsonWebTokenError('jwt must be provided'));
}

add something like:

if (options.maxLength && options.maxLength > 0 && jwtString.length > options.maxLength) {
  return done(new JsonWebTokenError('jwt too long'));
}

This way it would be up to the developer of any given application to determine what counts as a sensible length for an encoded JWT string, while remaining backward compatible with existing code. Just a thought.
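
Purely to show the proposed shape, a hypothetical call site if such an option were adopted (maxLength is not part of the published jsonwebtoken API; token and secret are assumed to come from the application):

const jwt = require('jsonwebtoken');

jwt.verify(token, secret, { maxLength: 4096 }, function (err, payload) {
  if (err) {
    // would include the proposed 'jwt too long' error for oversized tokens
    return console.error(err.message);
  }
  console.log(payload);
});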

panva commented 5 years ago

Huge request body sizes in Node.js are usually dealt with either at the TLS-terminating proxy (such as Nginx, Apache, or Caddy) or at the request body parser level. Also, closing as stale.

jtlapp commented 5 years ago

It's not the absolute size of the request that's the problem. It's the fact that a JSON request consumes a lot of clock cycles to parse. A parser should be able to handle anything thrown at it, as the programmer intends, but a web server should be more discriminating about what it processes.

jtlapp commented 5 years ago

Especially when the upper bound on size is knowably small. The server stack could do a lot more to protect itself.

panva commented 5 years ago

Going off the title here:

Not guarding against giant-JSON DoS attacks?

Something "giant" has to get through a web server and body parser first. Note that most body parsers in the Node ecosystem (for either Express or Koa) have defaults set to 100kb or 56kb.

JavaScript's JSON.parse cannot be passed any options to parse "with a limit", and not sending arbitrarily sized JWTs to verify can easily be done at the application level.
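
For example, a minimal sketch of those application-level limits, assuming Express with its built-in JSON body parser; the route, limit values, and JWT_SECRET environment variable are illustrative:

const express = require('express');
const jwt = require('jsonwebtoken');

const app = express();
app.use(express.json({ limit: '100kb' })); // cap request body size at the parser

app.post('/protected', (req, res) => {
  const token = (req.headers.authorization || '').replace(/^Bearer /, '');

  // application-level limit on the JWT itself, before any decoding
  if (!token || token.length > 4096) {
    return res.status(400).send('missing or oversized token');
  }

  jwt.verify(token, process.env.JWT_SECRET, (err, payload) => {
    if (err) return res.status(401).send('invalid token');
    res.json(payload);
  });
});

app.listen(3000);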