Open eseiler opened 9 months ago
The latest updates on your projects. Learn more about Vercel for Git ↗︎
Name | Status | Preview | Updated (UTC) |
---|---|---|---|
hibf | ✅ Ready (Inspect) | Visit Preview | Jan 26, 2024 4:24pm |
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 99.63%. Comparing base (dbbfb3d) to head (abad908).
:umbrella: View full report in Codecov by Sentry.
I thought we did this deliberately because it is faster if `kmers` doesn't have to reallocate memory every time?
Usually yes, but in this particular case we wanted to reduce memory consumption because we do the recursion afterwards. We didn't want to keep the k-mers around, because they are not needed anymore, and `clear` doesn't deallocate.
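A minimal sketch of that distinction, using `std::vector` for illustration (the `robin_hood::unordered_flat_set` in the actual code keeps its backing storage after `clear()` in the same way):

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

int main()
{
    std::vector<uint64_t> kmers(1'000'000); // pretend we inserted many k-mers
    kmers.clear();                          // destroys the elements, but capacity() is unchanged
    std::printf("after clear: size=%zu capacity=%zu\n", kmers.size(), kmers.capacity());

    std::vector<uint64_t>{}.swap(kmers);    // swap with an empty vector to actually release the memory
    std::printf("after swap:  size=%zu capacity=%zu\n", kmers.size(), kmers.capacity());
}
```

Letting the set go out of scope, as the refactoring proposed below does, releases the memory without needing an explicit swap.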
I can also run raptor with and without that change and check how it affects timings and RAM.
I think this change needs at least one benchmark on RefSeq to check that we don't degrade the performance too much.
> I think this change needs at least one benchmark on RefSeq to check that we don't degrade the performance too much.
Yes, I'll run RefSeq with and without. I'd figure this is more or less I/O limited, but let's see :)
Documentation preview available at https://docs.seqan.de/preview/seqan/hibf/192
On 40k RefSeq genomes with tmax 256, there is no change in RAM usage with this patch.
> On 40k RefSeq genomes with tmax 256, there is no change in RAM usage with this patch.
I guess that makes sense, because `kmers` only holds the k-mers of the maximum bin (i.e., a single bin). Then the only (probable) alternative is to read the files multiple times or use fewer threads :D
For this PR, we could also think about some refactoring.
For example, we could do something like
```cpp
auto & ibf = hibf.ibf_vector[ibf_pos];

{
    robin_hood::unordered_flat_set<uint64_t> kmers{};

    auto initialise_max_bin_kmers = [&]() -> size_t
    {
        if (current_node.max_bin_is_merged())
        {
            // recursively initialize favourite child first
            technical_bin_to_ibf_id[current_node.max_bin_index] =
                hierarchical_build(hibf,
                                   kmers,
                                   current_node.children[current_node.favourite_child_idx.value()],
                                   data,
                                   false);
            return 1;
        }
        else // max bin is not a merged bin
        {
            // we assume that the max record is at the beginning of the list of remaining records.
            auto const & record = current_node.remaining_records[0];
            build::compute_kmers(kmers, data, record);
            build::update_user_bins(technical_bin_to_user_bin_id, record);
            return record.number_of_technical_bins;
        }
    };

    // initialize lower level IBF
    size_t const max_bin_tbs = initialise_max_bin_kmers();
    ibf = construct_ibf(parent_kmers, kmers, max_bin_tbs, current_node, data, is_root);
} // `kmers` goes out of scope here; its memory is released

// parse all other children (merged bins) of the current ibf
auto loop_over_children = [&]()
{
    /* ... */
};
loop_over_children();

robin_hood::unordered_flat_set<uint64_t> kmers{};

// If max bin was a merged bin, process all remaining records, otherwise the first one has already been processed
size_t const start{(current_node.max_bin_is_merged()) ? 0u : 1u};
for (size_t i = start; i < current_node.remaining_records.size(); ++i)
{
    /* ... */
}
```
We put the `kmers` into a scope for the first use. Then we do `loop_over_children`. And then we have a new `kmers` set that's used for filling the current IBF. Or something else...
The capacity remains the same after calling `clear`. I also renamed `kmers` to `local_kmers` in `loop_over_children` because it might be shadowing `kmers`.
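For context, a minimal, hypothetical sketch of the shadowing hazard the rename avoids (using `std::unordered_set` as a stand-in for `robin_hood::unordered_flat_set`; the names mirror the snippet above, but this is not the PR's actual code):

```cpp
#include <cstdint>
#include <cstdio>
#include <unordered_set> // stand-in for robin_hood::unordered_flat_set

int main()
{
    std::unordered_set<uint64_t> kmers{}; // outer set, as in the snippet above

    auto loop_over_children = [&]()
    {
        // Without the rename, declaring another `kmers` here would shadow the
        // captured outer one, and every insert would silently go into the
        // inner set. `local_kmers` keeps the two sets unambiguous.
        std::unordered_set<uint64_t> local_kmers{};
        local_kmers.insert(42u);
        // ... merge local_kmers into the parent's data structures ...
    };

    loop_over_children();
    std::printf("outer kmers size: %zu\n", kmers.size()); // still 0
}
```

Compilers can flag such shadowing with `-Wshadow`, but the rename makes the intent explicit either way.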