Open Quuxplusone opened 8 years ago
Attached slpMin.ll
(2634 bytes, application/octet-stream): ll version of the example
Eyal, thank you for looking at this. I think that modeling spills at the SLP-vectorizer level is a really bad idea. The SLP vectorizer usually reduces register pressure because it enables the use of vector registers in addition to scalar registers. Attempting to predict register pressure is pointless. It's very difficult to guess what ISel and the scheduler would do, especially when vectorizing across basic blocks. This whole thing feels like a hack that was inserted to handle one specific workload and I suspect that if we simply remove this code we won't see any regressions in the LLVM test suite.
I suggest that we rip out getSpillCost.
Indeed, rematerialization or, e.g., scheduling calls above loads can mitigate spill costs, as the attached example hints. And using vector registers in addition to scalar registers should be an advantage, when they are available. However, consider the 3rd issue: in order to best support TSLP, if we encounter calls whose calling conventions have no callee-saved vector registers, that should affect our throttling process, right?
As for ripping out getSpillCost, that is best championed by its authors. This PR suggests improvements, and implies that its tests should cover a wider range of trees.