
High memory usage compiling `keccak` benchmark #54208

Closed: nnethercote closed this issue 2 years ago

nnethercote commented 6 years ago

According to perf.rust-lang.org, a "Clean" build of keccak-check has a max-rss of 637 MB. Here's a Massif profile of the heap memory usage: [Massif profile: keccak-clean]

The spike is due to a single allocation of 500,363,244 bytes here: https://github.com/rust-lang/rust/blob/28bcffead74d5e17c6cb1f7de432e37f93a6b50c/src/librustc/middle/liveness.rs#L601 Each vector element is a Users, which is a three-field struct taking up 12 bytes. num_live_nodes is 16,371, num_vars is 2,547, and 12 × 16,371 × 2,547 = 500,363,244.

I have one idea to improve this: Users is a triple containing two u32s and a bool, which means that it is 96 bits even though it only contains 65 bits of data. If we split it up so we have 3 vectors instead of a vector of triples, we'd end up with 4 × 16,371 × 2,547 + 4 × 16,371 × 2,547 + 1 × 16,371 × 2,547 = 375,272,433 bytes, which is a reduction of 125,090,811 bytes. This would get max-rss down from 637MB to 512MB, a reduction of 20%.

Alternatively, if we packed the bools into a bitset we could get it down to 338,787,613 bytes, which is a reduction of 161,575,631 bytes. This would get max-rss down from 637MB to 476MB, a reduction of 25%. But it might slow things down... it depends on whether the improved locality is outweighed by the extra instructions needed for bit manipulation.
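
To make the layout argument concrete, here is a minimal sketch of the two representations. The field names and the `UsersSoA` type are illustrative only, not the actual definitions in liveness.rs; it just assumes Users is roughly two u32s plus a bool, as described above.

```rust
// Minimal sketch of the array-of-structs vs. struct-of-arrays layouts.
// Field names are illustrative, not the actual ones in liveness.rs.
struct Users {
    reader: u32,
    writer: u32,
    used: bool,
}

// Struct-of-arrays split: 4 + 4 + 1 = 9 bytes per (live node, var) pair;
// packing `used` into a bitset would shrink that further, to 8 bytes
// plus 1 bit per pair.
struct UsersSoA {
    readers: Vec<u32>,
    writers: Vec<u32>,
    used: Vec<bool>,
}

fn main() {
    // Two u32s and a bool are 65 bits of data, but alignment pads the
    // struct out to 12 bytes (96 bits).
    assert_eq!(std::mem::size_of::<Users>(), 12);

    let n = 16_371 * 2_547; // num_live_nodes * num_vars for keccak
    println!("AoS: {} bytes", n * std::mem::size_of::<Users>()); // ~500 MB
    println!("SoA: {} bytes", n * (4 + 4 + 1));                  // ~375 MB
}
```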

@nikomatsakis: do you have any ideas for improving this on the algorithmic side? Is this dense num_live_nodes * num_vars representation avoidable?

nnethercote commented 6 years ago

Now for NLL. According to perf.rust-lang.org, an "Nll" build of keccak-clean has a max-rss of 1239MB. Here's a Massif profile of the heap usage: [Massif profile: keccak-nll] The 500MB Liveness spike is still visible, but it is outweighed by a later plateau that is dominated by three allocation sites that allocate 308,588,896, 308,588,896, and 270,743,088 bytes respectively.

The three allocation sites are here: https://github.com/rust-lang/rust/blob/28bcffead74d5e17c6cb1f7de432e37f93a6b50c/src/librustc_mir/borrow_check/mod.rs#L171-L197

Each do_dataflow() call ends up here: https://github.com/rust-lang/rust/blob/28bcffead74d5e17c6cb1f7de432e37f93a6b50c/src/librustc_mir/dataflow/mod.rs#L710 In each case num_blocks is 25,994, and bits_per_block is 94,972 in the first two and 83,308 in the third.
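
Those sizes are what you'd expect from a dense bitset with one word-aligned row per basic block: roughly num_blocks × (bits_per_block rounded up to a multiple of 64) / 8 bytes. A back-of-the-envelope check (the helper below is illustrative arithmetic only, not rustc's actual dataflow code):

```rust
// Rough size estimate for a dense per-block bitset, assuming one
// word-aligned row of bits per basic block (illustrative only).
fn on_entry_set_bytes(num_blocks: usize, bits_per_block: usize) -> usize {
    let words_per_block = (bits_per_block + 63) / 64; // round up to 64-bit words
    num_blocks * words_per_block * 8
}

fn main() {
    // flow_inits / flow_uninits: ~308.6 MB each
    println!("{}", on_entry_set_bytes(25_994, 94_972));
    // flow_ever_inits: ~270.8 MB
    println!("{}", on_entry_set_bytes(25_994, 83_308));
}
```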

I tried changing on_entry_sets to a HybridIdxSet but it didn't help, because it turns out these bitsets end up with many bits set, so HybridIdxSet just switches to the dense representation anyway.
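
For context, the hybrid idea is roughly the following (an illustrative sketch under my own naming, not the actual HybridIdxSet implementation or API): keep a small plain list while the set is sparse, and densify into one bit per possible element once a threshold is crossed. Since these dataflow sets end up with most bits set, the dense arm is reached almost immediately, so nothing is saved.

```rust
// Illustrative sketch of a hybrid sparse/dense index set; not the real
// HybridIdxSet from rustc.
const SPARSE_MAX: usize = 8;

enum HybridSet {
    Sparse(Vec<u32>), // cheap while only a few elements are present
    Dense(Vec<u64>),  // one bit per possible element
}

impl HybridSet {
    fn new() -> Self {
        HybridSet::Sparse(Vec::new())
    }

    fn insert(&mut self, elem: u32, domain_size: usize) {
        if let HybridSet::Sparse(v) = self {
            if v.len() < SPARSE_MAX {
                if !v.contains(&elem) {
                    v.push(elem);
                }
                return;
            }
            // Too many elements: pay for the dense representation after all.
            let mut words = vec![0u64; (domain_size + 63) / 64];
            for &e in v.iter() {
                words[e as usize / 64] |= 1u64 << (e % 64);
            }
            *self = HybridSet::Dense(words);
        }
        if let HybridSet::Dense(words) = self {
            words[elem as usize / 64] |= 1u64 << (elem % 64);
        }
    }
}

fn main() {
    let mut set = HybridSet::new();
    for i in 0..20 {
        // Densifies as soon as more than SPARSE_MAX elements are inserted.
        set.insert(i, 94_976);
    }
}
```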

One trivial idea: it looks like flow_inits doesn't need to be live at the same time as flow_uninits and flow_ever_inits. If that's right, we should be able to reduce the peak by 308MB, from 1239MB to 931MB, a 25% reduction.

@nikomatsakis: any other thoughts here from the algorithmic side?

nnethercote commented 6 years ago

I have one idea to improve this: Users is a triple containing two u32s and a bool, which means that it is 96 bits even though it only contains 65 bits of data. If we split it up so we have 3 vectors instead of a vector of triples,

I have implemented this in #54211.

nnethercote commented 6 years ago

One trivial idea: it looks like flow_inits doesn't need to be live at the same time as flow_uninits and flow_ever_inits. If that's right, we should be able to reduce the peak by 308MB, from 1239MB to 931MB, a 25% reduction.

I have implemented this in #54213.

nnethercote commented 6 years ago

#54420 improves the non-NLL case some more.

nnethercote commented 6 years ago

#54420 improves the non-NLL case some more.

Because of this, the NLL:non-NLL ratio for max-rss has worsened, to 269%, i.e. NLL uses 2.69x more memory.

nikomatsakis commented 6 years ago

@nnethercote two questions:

  1. Are you still investigating here?
  2. Do you know how the memory usage is distributed between the various dataflow computations?

nikomatsakis commented 6 years ago

I guess this answers my question:

In each case num_blocks is 25,994, and bits_per_block is 94,972 in the first two and 83,308 in the third.

nnethercote commented 6 years ago

@nikomatsakis: I have run out of ideas on this one.

If it helps, here is what flow_uninits looks like, where each line represents a row (i.e. a BB) and shows the number of set bits and the total number of bits (which is rounded up to the nearest multiple of 64, because BitSet uses 64-bit words):

94971 / 94976
94968 / 94976
94968 / 94976
94966 / 94976
94964 / 94976
...
49547 / 94976
49543 / 94976
49542 / 94976
49540 / 94976
49537 / 94976

In other words, it is 25994 x 94976 bits (308.6MB), and the rows start off almost entirely set, and by the end drop down to about half set. About 75% of the bits are set.

And here's what flow_ever_inits looks like:

1 / 83328
79728 / 83328
5 / 83328
8 / 83328
9 / 83328
...
64142 / 83328
64146 / 83328
64148 / 83328
64151 / 83328
64155 / 83328

It is 25994 x 83328 bits (270.8MB). Apart from the second row, the rows start off almost empty and get fuller until they are 77% full by the end. About 38% of the bits are set.

I didn't look at flow_inits because its lifetime is separate and so it's no longer part of the memory peak.

I can't see how to represent this data more compactly, and I don't understand the algorithm in enough detail to know if less data could be stored. I also looked into separating the lifetimes of the two structures but they are used in tandem, as far as I can tell.

pnkfelix commented 6 years ago

Discussed with @nikomatsakis during triage of NLL issues.

We decided that the memory usage on this case should not block NLL's inclusion in RC2.

In terms of whether to put this on the Release milestone or not, we decided that it would be a better idea, at least in the short-to-middle term, to focus effort more on Polonius, since that component might end up replacing the dataflow entirely, and thus the pay-off from optimizing rustc_mir::dataflow may not be so great.

So, tagging as NLL-deferred, with the intention of revisiting after we've learned more about what we plan to do with Polonius, if anything.

pnkfelix commented 5 years ago

NLL triage. P-medium. WG-compiler-performance.

jackh726 commented 2 years ago

Unsure if this is still relevant, but retagging the wg- label to wg-nll.

nnethercote commented 2 years ago

#93984 has been merged. It reduced max-rss on CI from 974MB to 399MB, a 2.44x reduction. This wins back enough of the original 2.69x regression that I am happy to declare victory here :smile: