I used git bisect to confirm that it's #302 that causes the big bump. Still working to figure out why 😅
Ah-hah! Figured it out! I'll write it up in more detail, but KmerMinHash.downsample_scaled(...) executes the downsampling and creates a new sketch even when that isn't needed (e.g. when the scaled value already matches), and it was being run on every sketch!
(Fixed in #464)
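For reference, here's a minimal sketch of the guard in spirit - using a hypothetical simplified `Sketch` struct and an illustrative cutoff formula, not the actual `KmerMinHash` API from sourmash core:

```rust
use std::borrow::Cow;

// Minimal illustrative sketch, not the real sourmash core API: `Sketch` is a
// hypothetical stand-in for KmerMinHash, and the cutoff formula is only for
// illustration.
#[derive(Clone, Debug)]
struct Sketch {
    scaled: u64,
    hashes: Vec<u64>,
}

impl Sketch {
    /// Downsample to `new_scaled`, returning the existing sketch untouched
    /// when it is already at the requested resolution. The regression was,
    /// in effect, taking the expensive "build a new sketch" path on every call.
    fn downsample_scaled(&self, new_scaled: u64) -> Cow<'_, Sketch> {
        if new_scaled == self.scaled {
            // Scaled already matches: no downsampling, no new sketch.
            return Cow::Borrowed(self);
        }
        // Only rebuild when genuinely needed: drop hashes above the new cutoff.
        let max_hash = u64::MAX / new_scaled; // illustrative cutoff, not sourmash's exact formula
        let hashes: Vec<u64> = self.hashes.iter().copied().filter(|&h| h <= max_hash).collect();
        Cow::Owned(Sketch { scaled: new_scaled, hashes })
    }
}

fn main() {
    let sk = Sketch { scaled: 1000, hashes: vec![42, 7_000_000, u64::MAX / 5_000] };
    // Same scaled: returns a borrow, no work done.
    assert!(matches!(sk.downsample_scaled(1000), Cow::Borrowed(_)));
    // Coarser scaled: builds a smaller sketch.
    let coarse = sk.downsample_scaled(100_000);
    println!("{} -> {} hashes", sk.hashes.len(), coarse.hashes.len());
}
```

The `Cow` return is just one way to express "don't allocate a new sketch when nothing changes"; the real fix in #464 may be structured differently.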
Updated benchmarks as of v0.9.8 are here: https://github.com/sourmash-bio/sourmash_plugin_branchwater/issues/479. Loading via .sig.zip files and manifest CSVs also seems to resolve the slowdown seen since v0.8.6. 🎉
When benchmarking https://github.com/sourmash-bio/sourmash_plugin_branchwater/pull/430, I noticed a massive slowdown in manysearch. I took a few days to track it down using our conda-forge releases and https://github.com/dib-lab/2022-branchwater-benchmarking.
See results in a_vs_d.txt under benchmarks/ for the various branches of the benchmark repo - summarized below.
I think the max memory is variable because it depends on loading order and how long the sketches are held in memory.
The bigger concern, of course, is the 10x increase in time that happened with release v0.9.6.
The culprit is #302, which added abundance-weighted information to manysearch and introduced a performance regression around sketch downsampling. #464 fixes the regression and also makes abundance calculations user-toggleable.
On a separate note, there has been a steady increase in execution time across releases - from v0.8.6 to v0.9.0, and again between v0.9.1 and v0.9.5.
I think the first of these two slowdowns might have to do with the switch to using core manifest loading, and in particular the need to read .sig files twice when using pathlists - once to build a manifest, and then again when actually loading the sketches for search. This might be something we can fix when dealing with https://github.com/sourmash-bio/sourmash_plugin_branchwater/pull/445.
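To make that double read concrete, here's a rough sketch - with hypothetical types and helper names, not the plugin's actual code - of the two-pass pathlist pattern versus a single pass that builds the manifest and loads the sketches together:

```rust
// Hypothetical, simplified types and helpers; the real plugin uses sourmash
// core's manifest and Signature types, and the actual fix may look different
// (e.g. caching parsed records, or skipping the manifest pass for pathlists).
#[derive(Clone, Debug)]
struct SigRecord {
    name: String,
    ksize: u32,
    scaled: u64,
}

/// Stand-in for the expensive step: parsing a .sig JSON file from disk.
fn parse_sig_file(path: &str) -> Vec<SigRecord> {
    vec![SigRecord { name: path.to_string(), ksize: 31, scaled: 1000 }]
}

/// Roughly the current pathlist behaviour: every file is parsed twice,
/// once to build the manifest and once to load sketches for search.
fn two_pass(paths: &[&str]) -> (Vec<String>, Vec<SigRecord>) {
    let manifest: Vec<String> = paths
        .iter()
        .flat_map(|p| parse_sig_file(p).into_iter().map(|r| r.name))
        .collect();
    let sketches: Vec<SigRecord> = paths.iter().flat_map(|p| parse_sig_file(p)).collect();
    (manifest, sketches)
}

/// One possible direction: parse each file once and derive both the manifest
/// and the in-memory sketches from that single pass (assuming it's acceptable
/// to hold them while the manifest is built).
fn one_pass(paths: &[&str]) -> (Vec<String>, Vec<SigRecord>) {
    let mut manifest = Vec::new();
    let mut sketches = Vec::new();
    for p in paths {
        let records = parse_sig_file(p);
        manifest.extend(records.iter().map(|r| r.name.clone()));
        sketches.extend(records);
    }
    (manifest, sketches)
}

fn main() {
    let paths = ["a.sig", "b.sig"];
    let (m1, s1) = two_pass(&paths);
    let (m2, s2) = one_pass(&paths);
    assert_eq!(m1, m2);
    assert_eq!(s1.len(), s2.len());
    println!("manifest entries: {}, sketches loaded: {}", m1.len(), s1.len());
}
```

Whether the single-pass version is actually viable depends on how #445 reorganizes loading, so this is only meant to illustrate where the duplicated work is.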
Not sure what changed between v0.9.1 and v0.9.5, though. Maybe I should run some more benchmarks to nail down the exact release where the time increase occurred. Or maybe we should just profile the heck out of the plugin. Or both.