Now for NLL. According to perf.rust-lang.org, an "Nll" build of keccak-clean has a max-rss of 1239MB. Here's a Massif profile of the heap usage:
The 500MB Liveness spike is still visible, but it is outweighed by a later plateau that is dominated by three allocation sites, which allocate 308,588,896, 308,588,896, and 270,743,088 bytes respectively.
The three allocation sites are here: https://github.com/rust-lang/rust/blob/28bcffead74d5e17c6cb1f7de432e37f93a6b50c/src/librustc_mir/borrow_check/mod.rs#L171-L197
Each do_dataflow() call ends up here: https://github.com/rust-lang/rust/blob/28bcffead74d5e17c6cb1f7de432e37f93a6b50c/src/librustc_mir/dataflow/mod.rs#L710
In each case num_blocks is 25,994, and bits_per_block is 94,972 in the first two and 83,308 in the third.
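As a back-of-the-envelope check (my own sketch, not rustc code), those allocation sizes fall out of storing one 64-bit-word-aligned bit-row per basic block; the results land within one row of the reported figures:

```rust
// Back-of-the-envelope check (not rustc code): one bit-row per basic
// block, with each row rounded up to a whole number of 64-bit words.
fn bitset_bytes(num_blocks: usize, bits_per_block: usize) -> usize {
    let words_per_row = (bits_per_block + 63) / 64;
    num_blocks * words_per_row * 8
}

fn main() {
    // The two 94,972-bit results: ~308.6MB each.
    println!("{}", bitset_bytes(25_994, 94_972)); // 308,600,768
    // The 83,308-bit result: ~270.8MB.
    println!("{}", bitset_bytes(25_994, 83_308)); // 270,753,504
}
```

Both come out within one row of the reported 308,588,896 and 270,743,088 bytes.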
I tried changing on_entry_sets to a HybridIdxSet, but it didn't help: it turns out these bitsets end up with many bits set, so HybridIdxSet just switches to the dense representation anyway.
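For context, here is a minimal sketch of the hybrid idea (my own simplification; the threshold and representation details are made up, this is not rustc's actual HybridIdxSet): stay sparse while few bits are set, then degenerate to a dense bit vector, which is what happens on these nearly-full rows.

```rust
/// Minimal sparse-or-dense bit set, sketching the HybridIdxSet idea.
enum HybridSet {
    Sparse(Vec<usize>), // small sets: just the indices of the set bits
    Dense(Vec<u64>),    // large sets: one bit per element, in 64-bit words
}

const SPARSE_MAX: usize = 8; // arbitrary switch-over threshold

impl HybridSet {
    fn insert(&mut self, bit: usize, domain_size: usize) {
        match self {
            HybridSet::Sparse(elems) => {
                if elems.contains(&bit) {
                    return;
                }
                if elems.len() < SPARSE_MAX {
                    elems.push(bit);
                } else {
                    // Too many elements: fall back to the dense form. With
                    // most bits set, every row ends up paying this cost.
                    let mut words = vec![0u64; (domain_size + 63) / 64];
                    for &e in elems.iter().chain(Some(&bit)) {
                        words[e / 64] |= 1u64 << (e % 64);
                    }
                    *self = HybridSet::Dense(words);
                }
            }
            HybridSet::Dense(words) => {
                words[bit / 64] |= 1u64 << (bit % 64);
            }
        }
    }
}

fn main() {
    let mut row = HybridSet::Sparse(Vec::new());
    for i in 0..20 {
        row.insert(i, 94_976); // crosses SPARSE_MAX, so the row goes dense
    }
    if let HybridSet::Dense(words) = &row {
        assert_eq!(words.len(), 1_484); // 94,976 / 64
    }
}
```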
One trivial idea: it looks like flow_inits doesn't need to be live at the same time as flow_uninits and flow_ever_inits. If that's right, we should be able to reduce the peak by 308MB, from 1239MB to 931MB, a 25% reduction.
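A sketch of the shape of that restructuring (hypothetical code, not the real borrow_check; the Vec sizes just stand in for the three dataflow results):

```rust
// Hypothetical sketch of the lifetime split -- not the real
// borrow_check code. The point is that freeing flow_inits before
// allocating the other two keeps all three from being live at once.
fn borrow_check_sketch() {
    let flow_inits = vec![0u64; 308_588_896 / 8]; // ~308MB stand-in
    // ... do everything that needs flow_inits ...
    drop(flow_inits); // freed here, before the next allocations

    let flow_uninits = vec![0u64; 308_588_896 / 8]; // ~308MB
    let flow_ever_inits = vec![0u64; 270_743_088 / 8]; // ~270MB
    // ... the rest of borrow checking ...
    let _ = (flow_uninits, flow_ever_inits);
}

fn main() {
    borrow_check_sketch(); // peak ~579MB instead of ~887MB for the three
}
```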
@nikomatsakis: any other thoughts here from the algorithmic side?
I have one idea to improve this: Users is a triple containing two u32s and a bool, which means that it is 96 bits even though it only contains 65 bits of data. If we split it up so we have 3 vectors instead of a vector of triples, …
I have implemented this in #54211.
One trivial idea: it looks like flow_inits doesn't need to be live at the same time as flow_uninits and flow_ever_inits. If that's right, we should be able to reduce the peak by 308MB, from 1239MB to 931MB, a 25% reduction.
I have implemented this in #54213.
#54420 improves the non-NLL case some more.
Because of this, the NLL:non-NLL ratio for max-rss has worsened to 269%, i.e. NLL uses 2.69x as much memory.
@nnethercote two questions:
I guess this answers my question:
In each case num_blocks is 25,994, and bits_per_block is 94,972 in the first two and 83,308 in the third.
@nikomatsakis: I have run out of ideas on this one.
If it helps, here is what flow_uninits looks like, where each line represents a row (i.e. a BB) and shows the number of set bits and the total number of bits (the total is rounded up to the nearest multiple of 64, because BitSet uses 64-bit words):
94971 / 94976
94968 / 94976
94968 / 94976
94966 / 94976
94964 / 94976
...
49547 / 94976
49543 / 94976
49542 / 94976
49540 / 94976
49537 / 94976
In other words, it is 25994 x 94976 bits (308.6MB), and the rows start off almost entirely set, and by the end drop down to about half set. About 75% of the bits are set.
And here's what flow_ever_inits looks like:
1 / 83328
79728 / 83328
5 / 83328
8 / 83328
9 / 83328
...
64142 / 83328
64146 / 83328
64148 / 83328
64151 / 83328
64155 / 83328
It is 25994 x 83328 bits (270.8MB). Apart from the second row, the rows start off almost empty and get fuller until they are 77% full by the end. About 38% of the bits are set.
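For reference, dumps like the two above can be produced by popcounting each row's words. A sketch, assuming each row is stored as a slice of 64-bit words (an illustration, not the rustc BitSet API):

```rust
// Print "set / total" per row, in the format of the dumps above.
fn dump_density(rows: &[Vec<u64>]) {
    for row in rows {
        let set: u32 = row.iter().map(|w| w.count_ones()).sum();
        // The total is padded up to a multiple of 64, as noted above.
        println!("{} / {}", set, row.len() * 64);
    }
}

fn main() {
    let rows = vec![vec![u64::MAX; 2], vec![0b1011, 0]];
    dump_density(&rows); // prints "128 / 128" then "3 / 128"
}
```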
I didn't look at flow_inits because its lifetime is separate and so it's no longer part of the memory peak.
I can't see how to represent this data more compactly, and I don't understand the algorithm in enough detail to know if less data could be stored. I also looked into separating the lifetimes of the two structures but they are used in tandem, as far as I can tell.
Discussed with @nikomatsakis during triage of NLL issues.
We decided that the memory usage on this case should not block NLL's inclusion in RC2.
In terms of whether to put this on the Release milestone or not, we decided that it would be a better idea, at least in the short-to-middle term, to focus effort more on Polonius, since that component might end up replacing the dataflow entirely, and thus the pay-off from optimizing rustc_mir::dataflow may not be so great.
So, tagging as NLL-deferred, with the intention of revisiting after we've learned more about what we plan to do with Polonius, if anything.
NLL triage. P-medium. WG-compiler-performance.
Unsure if this is still relevant, but retagging the wg- label to wg-nll.
According to perf.rust-lang.org, a "Clean" build of keccak-check has a max-rss of 637MB. Here's a Massif profile of the heap memory usage.

The spike is due to a single allocation of 500,363,244 bytes here: https://github.com/rust-lang/rust/blob/28bcffead74d5e17c6cb1f7de432e37f93a6b50c/src/librustc/middle/liveness.rs#L601

Each vector element is a Users, which is a three-field struct taking up 12 bytes. num_live_nodes is 16,371, num_vars is 2,547, and 12 x 16,371 x 2,547 = 500,363,244.

I have one idea to improve this: Users is a triple containing two u32s and a bool, which means that it is 96 bits even though it only contains 65 bits of data. If we split it up so we have 3 vectors instead of a vector of triples, we'd end up with 4 x 16,371 x 2,547 + 4 x 16,371 x 2,547 + 1 x 16,371 x 2,547 = 375,272,433 bytes, which is a reduction of 125,090,811 bytes. This would get max-rss down from 637MB to 512MB, a reduction of 20%.

Alternatively, if we packed the bools into a bitset we could get it down to 338,787,613 bytes, which is a reduction of 161,575,631 bytes. This would get max-rss down from 637MB to 476MB, a reduction of 25%. But it might slow things down... it depends on whether the improved locality is outweighed by the extra instructions needed for bit manipulations.

@nikomatsakis: do you have any ideas for improving this on the algorithmic side? Is this dense num_live_nodes * num_vars representation avoidable?
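To make the layout arithmetic concrete, here is a sketch of the two representations (my own illustration; the field names are guesses, not the actual liveness.rs definitions):

```rust
use std::mem::size_of;

// Array-of-structs: 9 bytes of data (4 + 4 + 1) padded to 12 bytes,
// because the u32 fields force 4-byte alignment.
// (Field names are illustrative, not the real ones.)
#[allow(dead_code)]
struct Users {
    reader: u32,
    writer: u32,
    used: bool,
}

// Struct-of-arrays: no per-element padding, so 9 bytes per entry.
#[allow(dead_code)]
struct UsersSoa {
    readers: Vec<u32>,
    writers: Vec<u32>,
    used: Vec<bool>,
}

fn main() {
    assert_eq!(size_of::<Users>(), 12);

    let n: usize = 16_371 * 2_547; // num_live_nodes * num_vars

    println!("AoS:          {} bytes", n * 12);        // 500,363,244
    println!("SoA:          {} bytes", n * 9);         // 375,272,433
    println!("SoA + bitset: {} bytes", n * 8 + n / 8); // 338,787,613
}
```

The three-vector split is the shape of the change in #54211 mentioned above.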