jmsmkn closed this 1 week ago
@giovannicimolin @johnraz This ended up being a bigger change than expected but this gives the project both a performance boost and solves the issue of invalidated tokens. It's my first contribution here so happy to address any feedback.
@jmsmkn Adding this to my review queue for next week. Thanks for the contribution and for the bug fix! :rocket:
@jmsmkn Hey! I reviewed your PR and it's mostly looking good, but I wasn't able to properly test it because there are a few things messed up in the library right now. I'll get back to this and merge it as soon as I can figure out what's wrong.
This would be great to add in as token invalidation is what's stopping me from upgrading to v5. Is this fix still viable?
Echoing @Fireclunge 's comment, the invalidation of existing tokens is a non-starter for us when upgrading, unfortunately.
tldr; it seems like make_hex_compatible from #272 is the root of the backwards incompatibility problem. A small tweak to this function would make it possible to upgrade from 4.2 -> 5+ without invalidating previously generated tokens! See below for the details and the proposed solution.
I agree with @jmsmkn's assessment that token_key can be removed and the authentication logic can be simplified with a direct digest lookup. I went ahead and tested this upgrade path locally and still found that my pre-existing tokens (generated using 4.2.0) were considered invalid. It seems the root of the problem lies with the introduction of token prefixes in #272 - in particular, the make_hex_compatible() function makes a fundamental change to how tokens are hashed, leading to backwards incompatibility.
Before #272, this is what hash_token looked like:
def hash_token(token: str) -> str:
    """
    Calculates the hash of a token.
    Token must contain an even number of hex digits or
    a binascii.Error exception will be raised.
    """
    digest = hash_func()
    digest.update(binascii.unhexlify(token))
    return digest.hexdigest()
The important bit is that binascii.unhexlify(token) yields a "deconstructed" bytes representation of the token.
> token = 'd45b05ebc4bce4f365c01c1e0100f00cb03739184d374ab8dfb157e27db8ad8c'
> binascii.unhexlify(token)
b'\xd4[\x05\xeb\xc4\xbc\xe4\xf3e\xc0\x1c\x1e\x01\x00\xf0\x0c\xb079\x18M7J\xb8\xdf\xb1W\xe2}\xb8\xad\x8c'
In #272, the addition of a token prefix resulted in an additional step of ensuring the token string is hex-compatible. This is reasonable, as tokens like example_3af32... would fail when being converted to bytes:
> binascii.unhexlify(f"example_{token}")
*** binascii.Error: Non-hexadecimal digit found
However, the actual implementation of make_hex_compatible() is a bit odd in that it just converts the token str to a bytes str:
def make_hex_compatible(token: str) -> bytes:
    """
    We need to make sure that the token, that is send is hex-compatible.
    When a token prefix is used, we cannot guarantee that.
    """
    return binascii.unhexlify(binascii.hexlify(bytes(token, 'utf-8')))
> token = 'd45b05ebc4bce4f365c01c1e0100f00cb03739184d374ab8dfb157e27db8ad8c'
> make_hex_compatible(token)
b'd45b05ebc4bce4f365c01c1e0100f00cb03739184d374ab8dfb157e27db8ad8c'
And with a token prefix present, it does work, but it's still just the bytes() version of the token str:
> token = 'd45b05ebc4bce4f365c01c1e0100f00cb03739184d374ab8dfb157e27db8ad8c'
> make_hex_compatible(f"example_{token}")
b'example_d45b05ebc4bce4f365c01c1e0100f00cb03739184d374ab8dfb157e27db8ad8c'
All that said, make_hex_compatible is equivalent to:
> bytes(token, 'utf-8')
b'd45b05ebc4bce4f365c01c1e0100f00cb03739184d374ab8dfb157e27db8ad8c'
> bytes(f"example_{token}", 'utf-8')
b'example_d45b05ebc4bce4f365c01c1e0100f00cb03739184d374ab8dfb157e27db8ad8c'
Again, this still works, because digest.update() will gladly hash the bytes string without issue. However, if we want to ensure version 4.2 tokens still work with token prefixes, we would have to maintain the original "deconstructed" bytes format that was being hashed. Otherwise, you'll get two different hashes for the same token!
> digest_1 = hash_func() # hashlib.sha512
> digest_2 = hash_func()
> token = 'd45b05ebc4bce4f365c01c1e0100f00cb03739184d374ab8dfb157e27db8ad8c'
> digest_1.update(binascii.unhexlify(token)) # the original hash format in v4.2
> digest_2.update(make_hex_compatible(token)) # the new hash format in v5+
> digest_1.hexdigest()
'03fd15d542ded8d635cd7939020f44b813f796ada8f7d6b84dac87fe8e04ce891fcadfb800b31ccc0c0bbe14401c57cf4a250886203cc1187cabdef45705cb5f'
> digest_2.hexdigest()
'a78ad04bb016c54be0ddce4ae54d9476fb8b8d7cee0061631e561f6f915934b91627c0b15d438afb0ee3c231b2d92365fa3cdbc405d49a3738e6851235fcb4ff'
NOTE: This of course assumes you are using the default cryptography.hazmat.primitives.hashes.SHA512 in v4.2, and then the new default hashlib.sha512 in v5+. Because salt is no longer used, these two algorithms should produce the same sha512 hash. Backwards compatibility is possible here!
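If you want to convince yourself of that, here's a minimal check (a sketch; it assumes only the stdlib hashlib and the public cryptography package API):

import binascii
import hashlib
from cryptography.hazmat.primitives import hashes

token = 'd45b05ebc4bce4f365c01c1e0100f00cb03739184d374ab8dfb157e27db8ad8c'
data = binascii.unhexlify(token)

# v5+ default: hashlib.sha512
new_digest = hashlib.sha512(data).hexdigest()

# v4.2 default: the cryptography package's SHA512
h = hashes.Hash(hashes.SHA512())
h.update(data)
old_digest = h.finalize().hex()

assert old_digest == new_digest  # identical SHA-512 output, different API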
(Sorry for the long reply, here's my proposed solution, which I think could slot nicely into this PR.) Update make_hex_compatible as follows:
def make_hex_compatible(token: str) -> bytes:
    """
    Ensure a token, which may contain a TOKEN_PREFIX, is hex-compatible before hashing.
    """
    try:
        # this supports tokens generated in v4.2 and any tokens which do not contain a TOKEN_PREFIX
        return binascii.unhexlify(token)
    except (binascii.Error, ValueError):
        # if a token has a prefix, encode it so that it's hex-compatible and can be hashed
        return binascii.hexlify(token.encode('utf-8'))
Adding this check should be lightweight performance-wise and it maintains backwards compatibility for existing tokens (while supporting newer tokens that might use the token_prefix option).
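To sanity-check the proposed behavior, a quick REPL session (illustrative; the prefixed output is truncated here):
> token = 'd45b05ebc4bce4f365c01c1e0100f00cb03739184d374ab8dfb157e27db8ad8c'
> make_hex_compatible(token) == binascii.unhexlify(token)  # v4.2 bytes format preserved
True
> make_hex_compatible(f"example_{token}")  # prefixed tokens get hex-encoded instead
b'6578616d706c655f64343562...'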
@giovannicimolin let me know what you think, happy to help support this effort in this PR (if @jmsmkn is interested?) or break it out separately. Also happy to help write tests and whatnot.
Great find, @mr-niche! I'm unsure of the best way to proceed with this. Even though I was given the old "PRs welcome" treatment when I first raised the backward compatibility issue, it's been two months since I submitted a PR, and I haven't received any feedback. I spent a weekend working on improving the library, but without input from the maintainers, it's hard to move forward. If you'd like to make a PR to my branch to consolidate our efforts, I'd be happy to merge it. However, we ultimately need some guidance from the maintainers to get this resolved.
Thanks @jmsmkn ! I'll give @giovannicimolin some time to weigh in on how to proceed. If we get a thumbs up, I'll go ahead and submit a PR to your branch and we can get this ball rolling again. I'll also include some documentation around "Upgrading from v4 -> v5" so that folks who have raised this concern before (i.e. in #356 ) can see if the upgrade path is viable for their use case.
@jmsmkn @mr-niche Thanks for the work here so far - this is a really valuable contribution to the project.
In the last few weeks I haven't had enough time to follow up on this project; I don't have a lot of bandwidth for it right now. I'll try to catch up before the end of the week.
@mr-niche Thanks for the great in-depth investigation of the issue! I think it'll be great if we can get this shipped soon.
@jmsmkn Can you incorporate @mr-niche's changes into your PR and resolve the PR conflicts?
Let's make this the 5.1 version - lots of folks will be happy with the library being backwards compatible.
Thanks @giovannicimolin ! @jmsmkn , I'll take a stab at adding this to your PR, I have a little bit of time today.
Thanks @giovannicimolin ! I have fixed the conflicts. One question: we could still keep token_key around, as people may be using it as an identifier, even if it is no longer used internally. Let me know what you think.
@jmsmkn That makes sense, can you keep it around for now?
I'll take some time to test it out tomorrow. :)
@giovannicimolin my changes are in this PR (to be merged into @jmsmkn 's PR) https://github.com/jmsmkn/django-rest-knox/pull/1
We need some guidance on one last consideration: if we switch back to the original unhexlify strategy, we will be breaking compatibility with any tokens generated in the 5.0.* versions, unfortunately.
I'm not sure what the right thing to do here is, we could either:
1. Switch back to the original unhexlify strategy and accept that tokens generated on 5.0.* are invalidated, or
2. Support both hash formats at once, falling back to the legacy digest when the new-style lookup fails.
Option 2 feels risky. But option 1 is yet another breaking change (albeit a potentially smaller one - if folks have already upgraded to 5.0.*, they might not be as concerned about breaking changes, but I don't necessarily want to make that assumption).
Let me know what you think/if you have any other ideas here!
Thank you both for your efforts with this. I pieced together bits of your work and created a custom authentication class to allow this to work for me.
A quick summary for anyone that wants to upgrade but has doubts that it will be merged into main.
import binascii

from django.utils.translation import gettext_lazy as _
from rest_framework import exceptions
from rest_framework.exceptions import AuthenticationFailed

# note: import paths below assume knox 5's module layout
from knox.auth import TokenAuthentication
from knox.crypto import hash_func
from knox.models import get_token_model
from knox.settings import knox_settings


class TokenAuthenticationOverride(TokenAuthentication):
    """
    Extension of the TokenAuthentication class
    """
    def _legacy_make_hex_compatible(self, token: str) -> bytes:
        """
        Ensure a token, which may contain a TOKEN_PREFIX, is hex-compatible before hashing.
        Reduce this down to `return binascii.unhexlify(token)` if you did not use a prefix.
        """
        try:
            # this supports tokens generated in v4.2 and any tokens which do not contain a TOKEN_PREFIX
            return binascii.unhexlify(token)
        except (binascii.Error, ValueError):
            # if a token has a prefix, encode it so that it's hex-compatible and can be hashed
            return binascii.hexlify(token.encode('utf-8'))

    def _authenticate_legacy_credentials(self, token):
        # Allows backward compatibility with 4.2.0 tokens.
        # Tokens that have expired will be deleted and skipped.
        msg = _('Invalid token.')
        token = token.decode("utf-8")
        try:
            digest = hash_func()
            digest.update(self._legacy_make_hex_compatible(token))
            digest = digest.hexdigest()
        except (TypeError, binascii.Error):
            raise exceptions.AuthenticationFailed(msg)
        try:
            auth_token = get_token_model().objects.get(digest=digest)
        except get_token_model().DoesNotExist:
            raise exceptions.AuthenticationFailed(msg)
        if self._cleanup_token(auth_token):
            raise exceptions.AuthenticationFailed(msg)
        if knox_settings.AUTO_REFRESH and auth_token.expiry:
            self.renew_token(auth_token)
        return self.validate_user(auth_token)

    def authenticate_credentials(self, token):
        # Allows backward compatibility with 4.2.0 tokens.
        # https://github.com/jazzband/django-rest-knox/pull/362/files
        # Tokens that have expired will be deleted and skipped.
        try:
            # Use the new method first - all tokens issued after the upgrade will use this
            return super().authenticate_credentials(token)  # Knox 5.0.0+ tokens
        except AuthenticationFailed:
            return self._authenticate_legacy_credentials(token)  # Old 4.2.0 tokens
I was able to reduce make_hex_compatible(token) to binascii.unhexlify(token) in my application to remove the need for the extra function. The idea is to deprecate 4.2.0 tokens once an acceptable share of users are on the new tokens. Unfortunately there will be a small performance hit while this churns through, since each legacy token first fails the new-style lookup before the fallback runs.
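For anyone wiring this in: the override also needs to be registered as the default DRF authentication class. A minimal sketch, assuming the class above lives in yourapp/authentication.py (the module path is illustrative):

# settings.py
REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': (
        # swap knox.auth.TokenAuthentication for the backward-compatible override
        'yourapp.authentication.TokenAuthenticationOverride',
    ),
}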
I would suggest that this thread should be more visible and not closed, but it doesn't appear that many people actually care about this? 🫠😂 It's a huge L for the library IMO - just casually logging out all your users, no big deal. Absolutely nightmarish for B2C websites with a large volume of users.
Sweet, thanks for following up on this @Fireclunge ! This is a nice option for rolling forward - the performance hit is a bummer, but I'm glad you found an option that works for you.
@giovannicimolin , if this work isn't going to be supported/merged, I think the snippet above should be visible or linked in the README or CHANGELOG to highlight this as an "upgrade path" for 4.2 -> 5.
Additionally, the performance improvement @jmsmkn suggested (removing the salt loop) is still very relevant to the library, regardless of the make_hex_compatible work, and I think it should be prioritized. When you get a chance, can you provide an update on the support we can expect for this library? Is there anything the community can do to help move these things forward?
Thanks!
This PR removes token_key and adds a test that ensures old tokens continue to work. See discussion in #358 and #356.

When each token had its own salt, hash_token needed to be called per token with hash_token(token, auth_token.salt). token_key was introduced in 913336d5ec81564a1fed7f7857bb8183802601fa so that the comparison could then be done on a smaller number of tokens. However, the per-token salt was removed in 51a5204ba24494e5779467efa7172a942afcb0d1. That means that we can now move hash_token out of the filter loop of authenticate, as we no longer have to pass auth_token.salt; we only pass the token sent by the user. That gives us the digest straight away, meaning that we can do a direct lookup on digest (the primary key, so unique) to find the relevant token.

This also has the benefit of improving performance, as multiple hashes and comparisons no longer need to be made, and it solves the problem of tokens being invalidated between versions 4.2.0 -> 5.0.0. Additionally, MAXIMUM_TOKEN_PREFIX_LENGTH is no longer bound by TOKEN_KEY_LENGTH, so it could be increased if you wish.
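For readers skimming the diff, here is a minimal sketch of the direct digest lookup described above (illustrative, not the exact code from the PR; import paths assume knox 5's module layout):

import binascii

from knox.crypto import hash_func
from knox.models import get_token_model

def lookup_by_digest(token: str):
    # With the per-token salt gone, the digest depends only on the raw
    # token bytes, so it can be computed once, outside any loop.
    digest = hash_func()
    digest.update(binascii.unhexlify(token))
    # Direct primary-key lookup instead of filtering candidates on
    # token_key and re-hashing each one with its salt.
    return get_token_model().objects.get(digest=digest.hexdigest())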