willi19 / swpp202301-compiler-team6


[Sprint 2] Changing plans and pivoting away from PreExecutePass #18

Open goranmoomin opened 1 year ago

goranmoomin commented 1 year ago

I've briefly mentioned this privately, but here's a bit more on why I'm moving away from the PreExecutePass. (At this point I'm probably going to be bug-fixing Load2AloadPass and picking off some of the low-hanging fruit in our development cycle.)

The PreExecutePass would work best when there are generic functions linked in as a library and the whole program (module in LLVM parlance) uses only a subset of those functions.

Generic utility functions like printf contain logic that the program as a whole never exercises. My plan was to partially execute functions with known arguments (i.e. constant args, or known bits from LVI) and shake out the BBs known to be unreachable.
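
To make that concrete, this is roughly the shape I had in mind: a minimal sketch against the new pass manager, not the actual PreExecutePass code; the `PreExecuteSketchPass` name and the overall structure here are made up for illustration.

```cpp
#include "llvm/Analysis/LazyValueInfo.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/PassManager.h"

using namespace llvm;

// Sketch only: ask LVI whether a conditional branch's condition is a known
// constant at the branch, and if so rewrite it into an unconditional branch
// so the dead successor can be cleaned up later (e.g. by simplifycfg).
struct PreExecuteSketchPass : PassInfoMixin<PreExecuteSketchPass> {
  PreservedAnalyses run(Function &F, FunctionAnalysisManager &FAM) {
    auto &LVI = FAM.getResult<LazyValueAnalysis>(F);
    bool Changed = false;
    for (BasicBlock &BB : F) {
      auto *BI = dyn_cast<BranchInst>(BB.getTerminator());
      if (!BI || !BI->isConditional())
        continue;
      // "What do we know about this value at this point?"
      auto *CI = dyn_cast_or_null<ConstantInt>(
          LVI.getConstant(BI->getCondition(), BI));
      if (!CI)
        continue;
      BasicBlock *Taken = BI->getSuccessor(CI->isZero() ? 1 : 0);
      BasicBlock *Dead = BI->getSuccessor(CI->isZero() ? 0 : 1);
      if (Taken == Dead)
        continue;
      // Keep only the taken edge; Dead loses this predecessor and becomes
      // unreachable from here if this was its only incoming edge.
      Dead->removePredecessor(&BB);
      BranchInst::Create(Taken, BI);
      BI->eraseFromParent();
      Changed = true;
    }
    return Changed ? PreservedAnalyses::none() : PreservedAnalyses::all();
  }
};
```

The actual deletion of the now-dead blocks would be left to a later cleanup pass; the interesting question is whether LVI ever answers the `getConstant` query on our benchmarks.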

It turns out that all of the benchmarks are algorithmic calculations done straight on read()-ed values, and the utility functions are mostly memory-related. Neither benefits from what I was planning.

Preliminary analysis with LVI suggests there are basically zero basic blocks that we can optimize with this approach. We're not suddenly going to build a more sophisticated static analysis than what LLVM can currently do in a week (though I am frustrated that we're tied to LLVM 15 and not 16; AFAIU LLVM 16 has some LVI improvements, and I couldn't check them), so I'm abandoning this.

Sorry for being dumb (after all this, it turns out I should have realized this wasn't going to work out as soon as we got the spec); I'll try to squeeze out some more perf from Load2AloadPass and future passes.

goranmoomin commented 1 year ago

Honestly, at this point I'm just frustrated at being tied to an idiotic machine with an idiotic schedule (8 days for writing a pass, 6 days for a code review? Lolol!), while we don't get to touch any of the other parts.

I mean, if this is a development class, just let us dump all of the low-hanging existing LLVM passes in one go. What's the point of tying our hands and making us wait a few weeks just to add a meager 6 passes? What's the point of giving us the compiler when we're only ever going to write LLVM passes? Just benchmark with an LLVM IR interpreter lol.

But I digress...

sharaelong commented 1 year ago

Yes, speaking as someone from the competitive programming / problem solving (PS) domain, the given test cases are really close to that domain. So it's easy to expect that a precompute pass won't be a good optimization: in most cases the function inputs are constructed to scale with the input size or the data types (64-bit, 128-bit number types, and so on), rather than being known constants. In summary, in my view, advancing the Load2AloadPass looks effective for PS-style test cases, since the input is usually too big to keep everything in registers. Also, the logic that modifies variables tends to be longer than in general development. These characteristics should work in favor of our optimization.
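
To illustrate the register-pressure point (just a hypothetical candidate filter, not the actual Load2AloadPass logic; the `isAloadCandidate` helper and the `MinGap` threshold are made up): a load is only worth converting to the asynchronous aload form if there is enough independent work between it and its first use to hide the latency behind.

```cpp
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Instructions.h"
#include <iterator>

using namespace llvm;

// Hypothetical candidate filter, not the real Load2AloadPass: a load is a
// good aload candidate only if enough independent instructions sit between
// the load and its first in-block use to overlap with the memory access.
static bool isAloadCandidate(LoadInst &LI, unsigned MinGap = 4) {
  BasicBlock *BB = LI.getParent();
  unsigned Gap = 0;
  for (auto It = std::next(LI.getIterator()), E = BB->end(); It != E;
       ++It, ++Gap) {
    for (Value *Op : It->operands())
      if (Op == &LI) // first user of the loaded value in this block
        return Gap >= MinGap;
  }
  // No in-block use: plenty of room to overlap the load with later work.
  return true;
}
```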