I noticed that for large tokens (not huge, just a couple dozen kB), especially for JWS, the decode function was very slow; most of the runtime (90% in my case) was spent in the regex. While large tokens are probably not common, I thought I'd make it faster.
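To sketch the kind of hotspot involved (a hypothetical illustration, not the exact regex used in Crypt::JWT): a compact JWS token is three dot-separated base64url segments, and they can be pulled apart either with one anchored capturing regex over the whole string, or with a plain `split` on `.` followed by a cheap `tr///` alphabet check. The token and variable names below are made up for the example:

```perl
use strict;
use warnings;

# Hypothetical illustration: two ways to take apart a compact JWS token
# (header.payload.signature, each part base64url) into its segments.
my $token = join '.', ('A' x 30000), ('B' x 30000), ('C' x 86);

# 1) Single anchored regex that validates and captures in one pass.
my ($h1, $p1, $s1) = $token =~ m{^([A-Za-z0-9_-]+)\.([A-Za-z0-9_-]+)\.([A-Za-z0-9_-]*)$};

# 2) split into at most three parts, then validate with tr///c,
#    which counts characters outside the base64url alphabet.
my ($h2, $p2, $s2) = split /\./, $token, 3;
my $invalid = grep { tr/A-Za-z0-9_-//c } ($h2, $p2, $s2);

# Both approaches yield the same segments on a well-formed token.
die "mismatch" unless $h1 eq $h2 && $p1 eq $p2 && $s1 eq $s2;
print "segments ok, invalid=$invalid\n";
```

Both variants scan the whole string, but the second does much less per-byte work than a capturing character-class regex; the actual change in this PR may differ in detail.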
This is some benchmarking code:
```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);
use Crypt::JWT qw(encode_jwt decode_jwt);

my $data = "The rain in Spain stays mainly in the plain." x 10000;
my $key = '-----BEGIN PRIVATE KEY-----
MIGHAgEAMBMGByqGSM49AgEGCCqGSM49AwEHBG0wawIBAQQgYirTZSx+5O8Y6tlG
cka6W6btJiocdrdolfcukSoTEk+hRANCAAQkvPNu7Pa1GcsWU4v7ptNfqCJVq8Cx
zo0MUVPQgwJ3aJtNM1QMOQUayCrRwfklg+D/rFSUwEUqtZh7fJDiFqz3
-----END PRIVATE KEY-----';

my $token = encode_jwt(
    payload => $data,
    alg     => 'ES256',
    key     => \$key,
);

cmpthese(-2, {
    decode_jwt => sub {
        my $out = decode_jwt(token => $token, key => \$key);
    },
    # decode_jwt2 is the patched version from this PR
    decode_jwt2 => sub {
        my $out = decode_jwt2(token => $token, key => \$key);
    },
});
```
Where `decode_jwt2` is the version from this PR. In this (extreme) example the overall decode speedup is over 4x with perl 5.38.0.