since the complete base64-encoded URL can be quite long and the NTFS
limits are quite conservative. This hashes the URL instead, so the ability
to decode it is lost. base64 is still used to encode the digest before
trimming it to 10 chars, since base64 packs more bits per character
than a hex digest would.
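As a rough sketch of that scheme (the hash algorithm, the helper name and the urlsafe base64 variant below are assumptions for illustration, not necessarily what the actual code does):

```python
import base64
import hashlib

def short_name_for_url(url: str, length: int = 10) -> str:
    """Derive a short, lossy filename component from a URL (illustrative sketch)."""
    # Hash the URL; the original URL can no longer be recovered from the name.
    digest = hashlib.sha256(url.encode("utf-8")).digest()
    # base64-encode the raw digest: ~6 bits per character instead of 4 for hex,
    # so a 10-char prefix keeps more of the hash. The urlsafe variant avoids
    # '/' and '+', which are awkward in file names.
    encoded = base64.urlsafe_b64encode(digest).decode("ascii")
    return encoded[:length]

print(short_name_for_url("https://example.com/some/very/long/path?with=parameters"))
```

With 10 base64 characters this keeps roughly 60 bits of the hash, versus about 40 bits for 10 hex characters, so accidental clashes become correspondingly less likely.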
The reasoning for hashing here is that (trivially) trimming the URL before
or after base64 isn't sufficient, since name clashes would likely appear in
the first part (around the domain) and the last part (the filename). The
middle part, however, is not necessarily well defined/formatted, so it is
easiest to hash the whole URL.
Theoretically it would be possible to compare the URLs of clashing files
and extract only the differing parts, which could then be encoded in a
non-lossy or at least less lossy manner.
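A minimal sketch of that idea for two clashing URLs (the function name and approach are hypothetical, this is not implemented):

```python
def differing_middle(url_a: str, url_b: str) -> tuple[str, str]:
    """Strip the common prefix and suffix of two clashing URLs (hypothetical sketch)."""
    # Longest common prefix
    prefix = 0
    while prefix < min(len(url_a), len(url_b)) and url_a[prefix] == url_b[prefix]:
        prefix += 1
    # Longest common suffix that does not overlap the prefix
    suffix = 0
    while (suffix < min(len(url_a), len(url_b)) - prefix
           and url_a[-1 - suffix] == url_b[-1 - suffix]):
        suffix += 1
    return url_a[prefix:len(url_a) - suffix], url_b[prefix:len(url_b) - suffix]

# Only the differing path segments remain:
print(differing_middle("https://example.com/a/file.txt", "https://example.com/b/file.txt"))
# ('a', 'b')
```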
depends on #58
during development I ran into