Closed wlandau closed 7 months ago
A couple minor notes:

- targets version 2.0.0. According to semantic versioning best practice, a change in major version should be used for breaking changes. It is debatable whether changing hashes is actually a breaking change, since the code will still work, but there have been so many improvements since 1.0.0 that there is a case for bumping the major version.
- tar_make(), tar_make_future(), and tar_make_clustermq() should prompt the user with a dialogue explaining the change in hashes and giving them the opportunity to downgrade targets to keep the pipeline up to date.

Also:
- The inputs to vdigest64(), vdigest64_file(), and vdigest32() always have length 1. In other words, targets always supplies scalars to these functions. That means targets can safely discard the vectorization features in functions like digest_obj64().
- Move targets away from 32-bit hashes.

I have a couple of questions related to this:
- Does this affect targets stored with qs or any other serialization format, and do those need to be handled differently?
- arrow's parquet files written from R, for instance, contain in their metadata R-serialized copies of the data frame attributes (https://github.com/apache/arrow/blob/9ca7d787402c715ee84c1bb21cfca0e54ae2f12d/r/R/metadata.R#L19-L33), so this includes R version and locale information, and hashes are not portable across machines.

I'm not sure if this would help with local files created by the targets in the pipeline. RDS is the only file format where I would expect headers to be in a place for secretbase to detect. Hash stability seems like an important issue for qs, although I have not tested the outcomes on different machines.
This helps for in-memory R objects only - these are always implicitly serialized before they can be hashed.
For files, these are hashed as-is - it's a binary blob. So if the same file is moved to a different machine, it will hash the same. The actual serialization method used to generate the files will determine if the same files are produced on different machines. I hope that makes sense.
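A minimal sketch of the distinction (assuming secretbase's API, where the file argument switches from hashing a serialized object to hashing raw file bytes):

```r
library(secretbase)

# In-memory object: implicitly serialized first, then hashed.
obj_hash <- siphash13(mtcars)

# File: hashed as-is, byte for byte - no R serialization involved,
# so the same bytes on disk hash identically on any machine.
tmp <- tempfile()
writeLines("hello", tmp)
file_hash <- siphash13(file = tmp)
```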
This makes sense, but an area of ambiguity is RDS targets: not files generated with format = "file", but regular R objects saved in the local or remote store. If two machines with different locales generate identical R objects, can the stored files be moved between machines and reused because they have the same hash? Can hashing the target as saved on disk involve skipping the header, if we are aware that it is RDS format?
As far as I'm aware that shouldn't be a problem. You should be able to move RDS files to a different machine with different R version and locale and get identical R objects when loaded (assuming both support R serialization version 3 i.e. R >= 3.5).
Ah, but while the R objects loaded are identical, I believe targets is hashing the RDS files after writing and before reading.
If the same object, identical on two different computers with different locales, is saved as an RDS file on each machine and those files are hashed, then the hashes will be different.
The R language only guarantees that a round trip serialization and unserialization gives you identical objects regardless of where this is done. Unfortunately it does not guarantee hash stability.
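The header dependence is easy to see with base R's serialize(); a sketch (the byte offsets assume the XDR layout of serialization format version 3):

```r
x <- c(1, 2, 3)
bytes <- serialize(x, connection = NULL, version = 3)

# The version-3 header is "X\n", then three 4-byte integers (format
# version, writer R version, minimal reader R version), then the length
# and text of the session's native encoding.
rawToChar(bytes[1:2])    # "X\n"
rawToChar(bytes[19:23])  # e.g. "UTF-8" - this part is session-dependent

# Hashing `bytes` without skipping the header therefore mixes session
# metadata (R version, encoding) into the hash of the object's contents.
```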
@wlandau so you're clear on this: the underlying functions called for files and objects are completely separate - files don't go through the R serialization mechanism, so {secretbase} wouldn't 'detect' anything if hashing a file.
It might be possible to stream-unserialize and hash an RDS file (if we know it's an RDS file), but I'm not certain, and it'd be quite some effort!
@wlandau in case you were wondering, the PR https://github.com/shikokuchuo/secretbase/pull/5 was ultimately not merged. After further research, I think it would be worth the while to implement another hash, given we'll be breaking anyway. My original motivation for trying XXH64 was to make it non-breaking if you remember.
Thanks @shikokuchuo for chiming in here.
@noamross, I believe files moved from one machine to another should still have the same hashes. After #1244 is fixed, a pipeline moved to a different machine should stay up to date. If the contents of e.g. arrow files depend on the locale, that only comes into play when a target reruns for other reasons. The hash of the new file will be different, but that in itself will not have caused the target to rerun in the first place.
@wlandau Thanks. I see that moving files and metadata together from one machine to another should maintain the state of the pipeline. I'm trying to understand (and, if possible, expand) the conditions under which different systems can produce byte-identical R objects and be able to compare or share them (for purposes related to the extensions/experiments I discuss in https://github.com/ropensci/targets/discussions/1232#discussioncomment-8562459). It seems it will probably require loading and re-hashing an object in memory, but I was curious whether it is possible on objects serialized to disk.
@noamross as in my earlier response to @wlandau I think theoretically it would be possible to special case RDS files and pass these to our hash function attached to R unserialize (as opposed to serialize which is what we do for R in-memory objects), and skip the headers in the process. I don't know in practice if there would be any implementation obstacles. But this is a lot of effort and unless this is somehow more broadly useful, I don't think anyone will implement such functionality.
As in: if this is of primary concern, then probably save the files in some other invariant format.
Indeed, I was just thinking I should test how this all ends up working with qs (which I think has built-in xxhash).
Reverting back to the issue, shikokuchuo/secretbase#6 is now ready. This implements SipHash (specifically SipHash-1-3 which is a highly-performant variant).
This is a higher quality hash than the non-cryptographic hashes such as 'xxhash', and has some security guarantees whilst still being around the same speed. Technically it is not a cryptographic hash, but a pseudorandom function. Whilst we do not need the security guarantees here (and we are using it with the fixed reference key to maximise performance), the collision resistance is guaranteed vs. something like 'xxhash' where trivial collisions have been found and the quality of the hash has been questioned.
It has also seen wide adoption e.g. as the default hash for newer Python, in Rust and various other languages. I'll invite you to try it out before merging. Feel free to post comments on the PR itself.
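For context, SipHash is a keyed PRF; as noted above, secretbase uses the fixed reference key when no key is supplied. A hedged sketch, assuming the siphash13() signature with an optional key argument:

```r
# Same input, same (default reference) key: reproducible digest.
h1 <- secretbase::siphash13("hello")
h2 <- secretbase::siphash13("hello")

# A different key yields an unrelated digest - the PRF property that
# prevents attackers from generating collisions without knowing the key.
h3 <- secretbase::siphash13("hello", key = "secret")
```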
@wlandau SipHash is now merged into the main branch of secretbase, as I'm happy with its quality after testing against the reference implementation.
Awesome! I will benchmark it later this week.
As we discussed: given its deliberate focus on small data, as well as https://github.com/shikokuchuo/secretbase/discussions/8, I am having second thoughts about SipHash for targets.

@shikokuchuo, for secretbase, do you have plans for alternative hashes designed to be fast for large data? Otherwise, to solve this immediate issue, I am considering keeping digest and downgrading to serialization version 2.

Or, as long as targets needs two different hashing packages anyway, I might try xxhashlite, as @njtierney suggested in https://github.com/ropensci/targets/discussions/1212.
I'm still quite jetlagged - I forgot I was meant to be implementing SipHash anyway. From your benchmarking, the total time for file hashing is much lower than for in-memory hashing.
I still think SipHash is a good choice. It is a fast hash and I am not aware of another good quality hash that is faster. Edit: see https://github.com/shikokuchuo/secretbase/discussions/8#discussioncomment-9006238
I forgot I implemented this, but targets calculates hashes on parallel workers if storage = "worker" in tar_option_set() or tar_target(). So that makes me willing to accept a slightly slower hash. And our conversation today helped convince me that performance shouldn't nonlinearly plummet at some unknown threshold beyond 1 GB.
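For reference, worker-side storage (and hence worker-side hashing) is enabled with:

```r
library(targets)
# Workers save and hash their own output instead of shipping data
# back to the main process.
tar_option_set(storage = "worker", retrieval = "worker")
```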
Before switching targets to SipHash, I think I just need to read more about the quality of xxhash vs. SipHash vs. cryptographic hashes. I don't take this decision lightly, and after the switch, I hope never to change the hash in targets again.
From a closer look at https://eprint.iacr.org/2012/351.pdf, I see comments like:
However, essentially all standardized MACs and state-of-the-art MACs are optimized for long messages, not for short messages. Measuring long-message performance hides the overheads caused by large MAC keys, MAC initialization, large MAC block sizes, and MAC finalization.
When the authors claim SipHash is "optimized for short inputs", I think they mean it reduces this overhead. It does not necessarily mean the hash is slow for large inputs, which was originally my main concern.
But also:
We comment that SipHash is not meant to be, and (obviously) is not, collision-resistant
Maybe the authors are comparing SipHash with cryptographic hash functions, maybe not. So I am not sure how much better SipHash-1-3 is than xxhash64 with respect to collision resistance. (Although I would not expect xxhash64 to be better).
Your guess is as good as mine as to that throwaway comment. But SipHash is designed to guard against generated collisions, which are trivial for the weaker hashes (https://www.youtube.com/watch?v=Vdrab3sB7MU). That's why it's been adopted as the hash used by Python, Rust, etc. By definition, if it is a true PRF as claimed (indistinguishable from a uniform random generator), then it should not be any worse at collisions than other hashes.
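Note that the PRF property only rules out adversarial collisions; random collisions for any well-distributed 64-bit hash (SipHash-1-3 and xxhash64 alike) follow the birthday bound, sketched here:

```r
# Probability of at least one collision among n random 64-bit hashes,
# via the standard birthday-bound approximation 1 - exp(-n(n-1)/2^65).
p_collision <- function(n, bits = 64) {
  -expm1(-n * (n - 1) / 2^(bits + 1))
}
p_collision(1e6)  # about 2.7e-08 for a million distinct targets
```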
@shikokuchuo discovered an issue with how targets is using digest (either that, or a bug in digest, depending on which behaviors were intended). C.f. https://github.com/shikokuchuo/secretbase/pull/5#issuecomment-1961169664.

targets uses serialization version 3 in order to benefit from ALTREP:

https://github.com/ropensci/targets/blob/0625aa28f1a1949d45df66a831cc7d9b8eadfcf7/R/utils_digest.R#L33-L42
https://github.com/ropensci/targets/blob/0625aa28f1a1949d45df66a831cc7d9b8eadfcf7/R/utils_digest.R#L46

In digest version 0.6.34, the default argument skip = "auto" does not appear to skip any of the extra headers introduced by serialization version 3. In other words, these extra serialization headers are hashed together with the contents of the R object. One of these extra headers contains the locale, which is brittle and depends on the user's computing environment. @shikokuchuo confirmed this.

The upshot is that hashes in targets are currently not portable: if you migrate a project from one machine to another, or if you change the locale of your R session, your pipeline may no longer be up to date.

This portability issue needs to be corrected. But unfortunately, a completely back-compatible patch is impossible: any fix changes the hashes, which would invalidate existing pipelines.
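A portable hash could, in principle, skip that header before hashing. A hypothetical sketch (assuming the XDR layout of serialization version 3; this is not how targets or digest currently works):

```r
hash_portable <- function(object) {
  bytes <- serialize(object, connection = NULL, version = 3)
  # Header: 2 bytes "X\n" + 12 bytes of version integers + 4 bytes giving
  # the length n of the native encoding string + n bytes of encoding text.
  n <- sum(as.integer(bytes[15:18]) * 256^(3:0))  # big-endian integer
  body <- bytes[-seq_len(18 + n)]
  secretbase::siphash13(body)  # hash only the object's contents
}
```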
Switching to serialization version 2 would fix the problem. However, I prefer not to head in that direction, because I want to take this opportunity to solve multiple problems at once. Those other problems are:

- targets uses both secretbase and digest, which is cumbersome as far as dependencies are concerned because these packages overlap a lot. And targets needs secretbase for #1139.
- secretbase appears to be faster than digest, at least in my own cursory testing, and it does not need the same cumbersome vectorization tricks to achieve low overhead (see benchmarks in https://github.com/shikokuchuo/secretbase/pull/5).

So my current plan is to switch to secretbase::xxh64() (c.f. https://github.com/shikokuchuo/secretbase/pull/5) and take time to think about the 32-bit case.