Closed: bovee closed this 6 years ago
This is awesome!
Build is failing because I'm doing `cargo fmt -- --check`, but the recommended formatting is silly for this case... How about using `lazy_static!`? The example seems like what you want.
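For readers following along, the suggestion above might look roughly like this — a sketch using std's `OnceLock`, which expresses the same lazily-initialized-static pattern as the `lazy_static!` macro. The seed values here are placeholders, not the real ntHash constants:

```rust
use std::sync::OnceLock;

// Lazily build a 256-entry byte -> seed table the first time it's used.
// (lazy_static! expands to essentially this shape.)
fn lookup_table() -> &'static [u64; 256] {
    static TABLE: OnceLock<[u64; 256]> = OnceLock::new();
    TABLE.get_or_init(|| {
        let mut t = [0u64; 256];
        // Hypothetical seed values, for illustration only.
        t[b'A' as usize] = 0x0AAA;
        t[b'C' as usize] = 0x0CCC;
        t[b'G' as usize] = 0x0666;
        t[b'T' as usize] = 0x0777;
        t
    })
}
```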
Criterion is super cool =]
Since this is a small project, it's easier to test new crates (like criterion), so I'm pretty happy I spent some time writing the basic implementation (and already got two PRs, yay!)
Thanks! This was mostly a not-serious "do lookup tables improve perf?" PR (I was really hoping LLVM would be smart here, but it looks like lookup tables are still best).
I can definitely fix it up to use `lazy_static` over the next few days (been meaning to look at it for a while), and I have a couple of ideas for reintroducing `panic`s in the functions (which I continue to think are a good idea). I think that'll probably make `fmt` happy too?
Okay, I did a rewrite using `lazy_static`, which is much easier to read. I also added in `1` values for all non-ACGTN bases and `panic` if we ever see one (so the behavior should be identical to the existing behavior).
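A rough sketch of the sentinel scheme described above (shape assumed from the description; seed values are hypothetical): non-ACGTN slots hold `1`, and since no real seed equals `1`, reading the sentinel means the sequence contained an invalid base:

```rust
// Build the lookup table at runtime (as lazy_static would, once).
fn build_table() -> [u64; 256] {
    let mut t = [1u64; 256]; // 1 = sentinel for non-ACGTN bytes
    // Hypothetical seed values, for illustration only.
    t[b'A' as usize] = 0x0AAA;
    t[b'C' as usize] = 0x0CCC;
    t[b'G' as usize] = 0x0666;
    t[b'T' as usize] = 0x0777;
    t[b'N' as usize] = 0;
    t
}

// Fetch a seed, panicking on any byte outside ACGTN.
fn seed(table: &[u64; 256], b: u8) -> u64 {
    let v = table[b as usize];
    if v == 1 {
        panic!("non-ACGTN base in sequence: {:?}", b as char);
    }
    v
}
```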
The one minor issue is that perf regresses slightly compared to the precomputed lookup above:
```
nthash/nthash_iterator  time:   [68.371 us 68.823 us 69.460 us]
                        change: [-68.236% -67.967% -67.718%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 12 outliers among 100 measurements (12.00%)
  5 (5.00%) high mild
  7 (7.00%) high severe

nthash/nthash_simple    time:   [234.90 us 235.83 us 236.92 us]
                        change: [-70.306% -69.993% -69.664%] (p = 0.00 < 0.05)
                        Performance has improved.
Found 13 outliers among 100 measurements (13.00%)
  1 (1.00%) low mild
  7 (7.00%) high mild
  5 (5.00%) high severe
```
I think this is probably good to merge as is, but it would be nice to use a `const fn` at some point (maybe when that feature hits stable) to pregenerate the table at compile time for that extra ~10% boost we see above.
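For context, `const fn` bodies with mutable locals and loops have since stabilized (around Rust 1.46), so a compile-time table along these lines is now possible — again with hypothetical seed values:

```rust
// Evaluated entirely at compile time; the binary embeds the finished table,
// so there is no runtime initialization or lazy_static deref cost.
const fn build_nt_table() -> [u64; 256] {
    let mut t = [1u64; 256]; // 1 marks non-ACGTN slots
    // Hypothetical seed values, for illustration only.
    t[b'A' as usize] = 0x0AAA;
    t[b'C' as usize] = 0x0CCC;
    t[b'G' as usize] = 0x0666;
    t[b'T' as usize] = 0x0777;
    t[b'N' as usize] = 0;
    t
}

static NT_TABLE: [u64; 256] = build_nt_table();
```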
For reference, these are the numbers I see with the C++ ntHash:
```
./nt_bench
2018-09-14 19:15:12
Running ./nt_bench
Run on (32 X 2800.06 MHz CPU s)
CPU Caches:
  L1 Data 32K (x32)
  L1 Instruction 32K (x32)
  L2 Unified 256K (x32)
  L3 Unified 25600K (x32)
--------------------------------------------------
Benchmark           Time           CPU Iterations
--------------------------------------------------
BM_nthash       36444 ns      36443 ns      19454
```
(code in https://github.com/luizirber/nthash_perf)
So, I'm fine with merging this with `lazy_static` (because the code is clearer), but with the previous array we got pretty close to the original implementation.
@bovee @luizirber just in case y'all are interested: I saw this PR and was also surprised that Rust/LLVM wasn't generating optimal code for such a simple `match`. I dug into the generated LLVM IR and assembly extensively here: https://kevinlynagh.com/notes/match-vs-lookup/
Summary:

- Match is slower than lookup because more instructions are generated and the loop doesn't unroll as much.
- The overhead of accessing a `lazy_static` is actually significant in such a tight loop; you may want to write out the lookup table in full to avoid it. (Or wait until Rust 1.33, where `let` is allowed in const expressions.)
@lynaghk this is really interesting and much stranger than I thought when I threw together this PR; thanks for writing it up!
@lynaghk thank you for showing up and sharing your thoughtful post! That's an amazing deep dive into the problem!
(quick nitpick: the date in your post is "2018/01/22", and I was thinking "I don't think nthash is that old!" =P)
Good eye Luiz, just fixed that date =) Thanks
> Match is slower than lookup because more instructions are generated and the loop doesn't unroll as much.

Just for completeness, that is only a half-truth. The bigger problem is that the `match` statement gets compiled into a number of cmp/jmps which the CPU has a hard time predicting; the constant rollbacks stall execution.
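To make the two shapes concrete, here is a minimal side-by-side sketch (with hypothetical seed values): the `match` form lowers to a chain of compares and jumps that the branch predictor must guess for every byte, while the table form is a single indexed load with no branches to mispredict:

```rust
// Branchy form: compiles to a cmp/jmp chain.
fn seed_match(b: u8) -> u64 {
    match b {
        b'A' => 0x0AAA, // hypothetical seeds, for illustration only
        b'C' => 0x0CCC,
        b'G' => 0x0666,
        b'T' => 0x0777,
        b'N' => 0,
        _ => 1,
    }
}

// Branchless form: one indexed load from a precomputed table.
const fn init_table() -> [u64; 256] {
    let mut t = [1u64; 256];
    t[b'A' as usize] = 0x0AAA;
    t[b'C' as usize] = 0x0CCC;
    t[b'G' as usize] = 0x0666;
    t[b'T' as usize] = 0x0777;
    t[b'N' as usize] = 0;
    t
}
static TABLE: [u64; 256] = init_table();

fn seed_table(b: u8) -> u64 {
    TABLE[b as usize]
}
```

Both functions compute the same mapping; only the generated machine code differs.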
Inspired by this comment. It's kind of gross and doesn't `panic!` on a non-ACGTN nucleotide, but it does give a fairly substantial speed-up. Two notes:

- Is it worth adding back the `panic!`/non-nucleotide check?
- This could eventually use `const fn`; I'm not sure what the current status of those is though.

Also, I really love the `bench` library you're using here: