Open hedgar2017 opened 5 months ago
I don't think this is the result of any particular bug, the IR is just still too slow and your contract is large. The fact that it's taking that much memory is worrying, but I suspect it's not a memory leak but just a part of that general inefficiency. I think it may be due to the IR ballooning in size during some intermediate optimization steps. And not necessarily in terms of source code size, but probably some other metric (e.g. number of variables) that makes things harder for our Dataflow Analyzer. This is something we're actively working on right now so things should be getting better quickly in upcoming releases.
We're also doing some general streamlining of the new pipeline - one such improvement, the new optimizer sequence, already shipped in 0.8.26. Unfortunately, I see that in the case of your contract it triggers a StackTooDeep error. The stack-to-memory mover should be able to work around it, but apparently you have some non-memory-safe assembly blocks in there. If you fix them, you should already see some speedup in IR compilation time and memory use.
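For reference, an assembly block can be annotated as memory-safe so that the stack-to-memory mover is allowed to operate on it. A minimal sketch (the contract and function names here are made up for illustration):

```solidity
contract Example {
    // Reads the first word of `data`. The "memory-safe" annotation promises
    // that the block only accesses memory in ways Solidity considers safe,
    // which lets the optimizer spill stack variables to memory when it would
    // otherwise hit StackTooDeep.
    function firstWord(bytes memory data) internal pure returns (bytes32 w) {
        assembly ("memory-safe") {
            w := mload(add(data, 0x20))
        }
    }
}
```

The annotation is only valid if the block actually upholds the memory-safety rules; annotating an unsafe block can lead to miscompilation.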
Another improvement of this kind is already at the proof of concept stage (#15182) and may help a lot if your code has a lot of bytecode dependencies (i.e. is using `new`, `.creationCode`, or `.runtimeCode` a lot). I'd be interested to hear how much this helps in your case. It seems to cut compilation time by half in some Foundry-based projects I've benchmarked (e.g. Eigenlayer or Sablier). It would be interesting to see if it helps AAVE too. If you want to try it, you can get development builds of the compiler from our CI:
By the way, does `viaIR: true` really finish faster than requesting `irOptimized`? It would be odd because it's doing much more work, i.e. it's generating the same optimized IR artifact but then also running the Yul->EVM transformation on it to generate bytecode. This transformation is a known bottleneck too, so not running it should actually make the compiler finish significantly faster.
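For context, the difference between the two requests comes down to the standard JSON settings. A sketch (the source content is abbreviated to a trivial contract):

```json
{
  "language": "Solidity",
  "sources": {
    "C.sol": { "content": "contract C {}" }
  },
  "settings": {
    "viaIR": true,
    "outputSelection": {
      "*": { "*": ["irOptimized"] }
    }
  }
}
```

Requesting only `irOptimized` lets the compiler stop after optimized IR is produced; adding `evm.bytecode` to the output selection makes it also run the Yul->EVM transformation.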
> By the way does `viaIR: true` really finish faster than requesting `irOptimized`?
Yes, it was faster so I even suspected something wrong on the Yul serializer side. I tried 0.8.26 a few weeks ago and it seemed to reduce the IR emitting time from 40 to 15 minutes.
Though I'm not an AAVE or Foundry contributor. This inefficiency was part of a larger problem I was handling in the ZKsync toolchain, so I just routed this issue here.
> Yes, it was faster so I even suspected something wrong on the Yul serializer side.
Did it complete without errors though? It's possible that code generation runs into a StackTooDeep error and fails. In that case IR generation itself would still finish properly.
StackTooDeep can happen at any point during compilation, even close to the very end. Also, in case of a failure during compilation the compiler still gives you all the analysis output you requested along with the error. The error may be quite hard to notice in unformatted JSON. In fact I just ran into this myself while trying to compile this - it finished pretty fast via IR and I was wondering why until I passed it through `jq` and spotted the error there.
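Something along these lines makes the error easy to spot. A sketch - the `echo` here stands in for the actual compiler output, and the field names follow the standard JSON error format:

```shell
# In practice, pipe the real output instead of the canned blob:
#   ./solc --standard-json json_input_compat_ir.json | jq '...'
echo '{"errors":[{"severity":"warning","type":"Warning"},{"severity":"error","type":"YulException","message":"Stack too deep."}]}' \
  | jq '[.errors[] | select(.severity == "error")]'
```

This filters the `errors` array down to actual errors, ignoring warnings that would otherwise bury them.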
> I tried 0.8.26 a few weeks ago and it seemed to reduce the IR emitting time from 40 to 15 minutes.
That's nice!
> Though I'm not an AAVE or Foundry contributor. This inefficiency was part of a larger problem I was handling in the ZKsync toolchain, so I just routed this issue here.
Thanks for taking the time to report it then. We've actually been looking for good input for benchmarking. Having a contract that is this pathologically slow to compile will come in handy.
Description
The `irOptimized` emission takes around 40 minutes. It is slightly less if I do not request `irOptimized` but instead compile directly to EVM with `viaIR`. The RAM usage is also over the top, close to 25 GB during the last minutes before returning the output.
Environment
Steps to Reproduce
It can be reproduced with the standard JSON containing the AAVE v3 project. Unfortunately, I could not minimize it, as it seems to have flaky, non-linear effects.
Command line:
`./solc-0.8.25 --standard-json json_input_compat_ir.json`
Standard JSON input: `json_input_compat_ir.json`