TomAFrench opened this issue 1 week ago
Hmm, are we sure it's coming from N different instances of the same callstack? I thought an im::Vector should be shared behind the scenes. I suppose it's possible these callstacks are all so small that the small-vector optimization, which copies them instead of sharing them, is hurting us rather than helping us.
I'm speculating on that last bit, but this memory is definitely associated with the callstacks for the instructions being inserted, and I feel that something must be going awry here.
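To make the speculation concrete, here's a minimal model of the pattern (all names are hypothetical stand-ins, not the compiler's actual types): each instruction insertion clones the current callstack into a per-instruction map, so if short im::Vectors are stored inline rather than behind a shared pointer, every clone duplicates the elements.

```rust
use std::collections::HashMap;

use im::Vector;

// Hypothetical stand-ins for the real SSA types.
type InstructionId = u32;

#[derive(Clone, Debug, PartialEq, Eq, Hash)]
struct Location {
    file: u32,
    span: (u32, u32),
}

type CallStack = Vector<Location>;

fn main() {
    let mut current_callstack: CallStack = Vector::new();
    current_callstack.push_back(Location { file: 0, span: (10, 20) });

    // Simplified model of a per-instruction callstack map.
    let mut locations: HashMap<InstructionId, CallStack> = HashMap::new();

    for id in 0..1_000_000u32 {
        // If short im::Vectors are stored inline rather than behind a
        // shared pointer (the small-vector case being speculated about),
        // each clone here copies the elements instead of sharing them.
        locations.insert(id, current_callstack.clone());
    }

    println!("tracked callstacks for {} instructions", locations.len());
}
```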
We currently track the callstacks for each instruction in the `locations` field of the `DataflowGraph`: https://github.com/noir-lang/noir/blob/3c361c9f78a5d9de1b1bcb5a839d3bc481f89898/compiler/noirc_evaluator/src/ssa/ir/dfg.rs#L94
Heaptrack is reporting that inserting entries into this hashmap is using 17% of the compiler's peak memory.
We definitely shouldn't need 157MB of memory to track this, as the number of unique callstacks is relatively low. It should be straightforward to rework this so that we don't store N separate copies of the same callstack.
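One possible shape for the rework (a sketch only; `CallStackId` and the interner are hypothetical names, not the actual implementation): intern each unique callstack once and have the per-instruction map store a small id, so N instructions sharing a callstack cost N ids plus one stored callstack.

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the real SSA types.
type InstructionId = u32;

#[derive(Clone, Debug, PartialEq, Eq, Hash)]
struct Location {
    file: u32,
    span: (u32, u32),
}

type CallStack = Vec<Location>;

/// Index into the interner's list of unique callstacks.
#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
struct CallStackId(u32);

/// Stores each unique callstack once and hands out small ids.
#[derive(Default)]
struct CallStackInterner {
    stacks: Vec<CallStack>,
    ids: HashMap<CallStack, CallStackId>,
}

impl CallStackInterner {
    fn intern(&mut self, stack: &CallStack) -> CallStackId {
        if let Some(&id) = self.ids.get(stack) {
            return id;
        }
        let id = CallStackId(self.stacks.len() as u32);
        self.stacks.push(stack.clone());
        self.ids.insert(stack.clone(), id);
        id
    }

    fn get(&self, id: CallStackId) -> &CallStack {
        &self.stacks[id.0 as usize]
    }
}

fn main() {
    let mut interner = CallStackInterner::default();
    // The per-instruction map now holds 4-byte ids instead of full callstacks.
    let mut locations: HashMap<InstructionId, CallStackId> = HashMap::new();

    let current = vec![Location { file: 0, span: (10, 20) }];
    let id = interner.intern(&current);
    for instruction in 0..1_000_000u32 {
        locations.insert(instruction, id);
    }

    assert_eq!(interner.get(id), &current);
    println!(
        "{} instructions share {} unique callstack(s)",
        locations.len(),
        interner.stacks.len()
    );
}
```

With a scheme like this, the duplicated-callstack memory heaptrack is flagging should collapse to one copy per unique callstack.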