> it might be that we need to force inlining for all lambdas we use.

I would have thought that the compiler inlines the code, since the type of the lambdas is templated.
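In case we do need to force it, one portable pattern (just a sketch; `run_over_range` and `ScaledAdd` are made-up names for illustration, while `DEAL_II_ALWAYS_INLINE` is the existing macro from deal.II's config.h) is to replace the lambda by a small functor whose call operator carries the always-inline attribute, since attribute support directly on lambdas is compiler-specific:

```cpp
#include <deal.II/base/config.h> // provides DEAL_II_ALWAYS_INLINE

// Stand-in for a templated loop that receives a callable, as our kernels do.
template <typename Callable>
void run_over_range(const unsigned int n_entries, const Callable &operation)
{
  for (unsigned int i = 0; i < n_entries; ++i)
    operation(i);
}

// Equivalent of a lambda [&](const unsigned int i) { dst[i] += 2. * src[i]; },
// but with a call operator the compiler is forced to inline.
struct ScaledAdd
{
  double       *dst;
  const double *src;

  DEAL_II_ALWAYS_INLINE
  void operator()(const unsigned int i) const
  {
    dst[i] += 2. * src[i];
  }
};

void example_kernel(double *dst, const double *src, const unsigned int n)
{
  run_over_range(n, ScaledAdd{dst, src});
}
```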
In any case, we need to do a more comprehensive analysis of what instructions get generated and how to avoid them. I would like to finish #9794 first, because that might reveal something similar, so it is better to address things only once.
This is an important issue, but I would suggest postponing it: there are additional modifications coming to make element-centric loops work for gradients and the Hermite basis.
@kronbichler If I remember correctly, `FEEvaluationData` helped here, didn't it?
It definitely did help, but we need to look at the assembly code at some point before the release, so we should keep this open.
I have now done my detailed analysis of the effect, comparing a run with the templated polynomial degree against `fe_degree = -1` with the pre-compiled code from the deal.II library. Overall, I think that our goal was accomplished very well by the cleanup in #13056 and related pull requests. I can summarize my findings for a slightly changed version of step-59 (polynomial degree 4, 3D problem, double + float numbers): Going through the pre-compiled `FEFaceEvaluation::integrate_scatter` from the `libdeal_II.so` file increases the instruction count for the float numbers from 4.123 billion to 4.243 billion, i.e., by around 2.5%. The biggest part is now not the actual stepping into the functions, but rather the compiler generating slightly bigger prologue/epilogue sections when the entry point to the function calls is not as well-defined (pushing more registers to the stack on entry in order to restore them on exit), plus similar minor optimizations a compiler can only do when the call context is very well-defined. To put it differently, we waste around 3.5% of the instructions between the place where we initiate the `integrate_scatter` call and the point where the actual work starts, compared to 1% for the "compile-everything-at-once" approach.

If we really wanted to improve this further, we could replace the stepping-into algorithms by a jump table, cutting off maybe some 50 instructions per call to `gather_evaluate`/`integrate_scatter`. In the grand scheme of things, I cannot see this making a big difference.
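For illustration, such a jump table would look roughly like the following sketch (all names here are hypothetical stand-ins for the actual kernel instantiations in the deal.II evaluation code):

```cpp
#include <array>

// Pre-instantiated kernels for each compile-time degree; the trivial 2D body
// here stands in for the real templated integrate_scatter implementation.
template <int fe_degree>
void integrate_scatter_kernel(const float *src, float *dst)
{
  constexpr unsigned int dofs_per_cell = (fe_degree + 1) * (fe_degree + 1);
  for (unsigned int i = 0; i < dofs_per_cell; ++i)
    dst[i] += src[i];
}

using KernelPointer = void (*)(const float *, float *);

// Jump table: a single indirect call replaces the cascade of
// compare-and-branch instructions that steps to the right instantiation.
static constexpr std::array<KernelPointer, 7> kernel_table = {
  {&integrate_scatter_kernel<0>,
   &integrate_scatter_kernel<1>,
   &integrate_scatter_kernel<2>,
   &integrate_scatter_kernel<3>,
   &integrate_scatter_kernel<4>,
   &integrate_scatter_kernel<5>,
   &integrate_scatter_kernel<6>}};

void integrate_scatter(const unsigned int degree,
                       const float       *src,
                       float             *dst)
{
  kernel_table[degree](src, dst);
}
```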
What I also saw in my analysis is that, despite our efforts last fall and before, the boilerplate and selection code in the function at https://github.com/dealii/dealii/blob/cd34b0849ad82199eaa881386b167419eb34e34a/include/deal.II/matrix_free/evaluation_kernels.h#L4595-L4603 still consumes quite a few instructions. For AVX2 and float (8-wide), I see around 200-250 instructions spent setting up the various options before the vector access work starts. This was not the purpose of this issue, and it is likely impossible to do much about it, given the functionality that function provides and the implementation effort a fix would require. If anything, we need to address it at a higher level, e.g. by having an "evaluate + integrate on cells and faces" super-function like the low-level form used here: https://github.com/kronbichler/multigrid/blob/9e2a8781cad9da616166e6936d60915780b34ade/common/laplace_operator_dg.h#L932-L1657
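To make the idea concrete, here is a rough sketch of such a fused path (the cell part uses the actual `FEEvaluation` interface as in step-59; folding the face work into the same loop, indicated by the comment, is the hypothetical part):

```cpp
#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/matrix_free/fe_evaluation.h>
#include <deal.II/matrix_free/matrix_free.h>

using namespace dealii;

template <int dim, int fe_degree, typename Number>
void local_apply_fused(
  const MatrixFree<dim, Number>                    &data,
  LinearAlgebra::distributed::Vector<Number>       &dst,
  const LinearAlgebra::distributed::Vector<Number> &src,
  const std::pair<unsigned int, unsigned int>      &cell_range)
{
  FEEvaluation<dim, fe_degree, fe_degree + 1, 1, Number> phi(data);
  for (unsigned int cell = cell_range.first; cell < cell_range.second; ++cell)
    {
      phi.reinit(cell);
      // One entry into the evaluation machinery per cell batch ...
      phi.gather_evaluate(src, EvaluationFlags::gradients);
      for (unsigned int q = 0; q < phi.n_q_points; ++q)
        phi.submit_gradient(phi.get_gradient(q), q);
      // ... and the face contributions of this cell would be folded in
      // here as well before the single write-back below, so that the
      // option/degree dispatch is paid once per cell rather than per face.
      phi.integrate_scatter(EvaluationFlags::gradients, dst);
    }
}
```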
So to summarize, this issue has been resolved in a good way, and there is no further need for action at this point.
After implementing #10811 and the later restructuring like #10904, I see that the Laplacian evaluated with `gather_evaluate` and `integrate_scatter` on the faces runs more slowly with `FE_DGQHermite` than with `FE_DGQ`, at least in 2D and with degree 3. This should not be the case, because the Hermite case should access only half the vector data and also do somewhat fewer operations otherwise. We need to look into the performance before the next release. At least the gcc compiler generates way too many integer instructions to pass data among the various functions; it might be that we need to force inlining for all lambdas we use.
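For reference, the kind of face kernel being timed follows the step-59 pattern sketched below (the quadrature-point operation is a simplified stand-in for the actual interior-penalty flux). The expected advantage of `FE_DGQHermite` is that `gather_evaluate` only needs to read the two layers of unknowns adjacent to each face:

```cpp
#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/matrix_free/fe_evaluation.h>
#include <deal.II/matrix_free/matrix_free.h>

using namespace dealii;

template <int dim, int fe_degree, typename Number>
void local_apply_face(
  const MatrixFree<dim, Number>                    &data,
  LinearAlgebra::distributed::Vector<Number>       &dst,
  const LinearAlgebra::distributed::Vector<Number> &src,
  const std::pair<unsigned int, unsigned int>      &face_range)
{
  FEFaceEvaluation<dim, fe_degree, fe_degree + 1, 1, Number> phi_m(data, true);
  FEFaceEvaluation<dim, fe_degree, fe_degree + 1, 1, Number> phi_p(data, false);

  for (unsigned int face = face_range.first; face < face_range.second; ++face)
    {
      phi_m.reinit(face);
      phi_p.reinit(face);

      // For FE_DGQHermite, this reads only the 2 (of fe_degree + 1) layers
      // of unknowns next to the face, which determine values and first
      // derivatives; for FE_DGQ, all layers must be read.
      phi_m.gather_evaluate(src,
                            EvaluationFlags::values |
                              EvaluationFlags::gradients);
      phi_p.gather_evaluate(src,
                            EvaluationFlags::values |
                              EvaluationFlags::gradients);

      for (unsigned int q = 0; q < phi_m.n_q_points; ++q)
        {
          // Simplified stand-in for the step-59 flux: penalize the jump of
          // the solution across the face.
          const auto jump = phi_m.get_value(q) - phi_p.get_value(q);
          phi_m.submit_value(jump, q);
          phi_p.submit_value(-jump, q);
          phi_m.submit_normal_derivative(Number(-0.5) * jump, q);
          phi_p.submit_normal_derivative(Number(-0.5) * jump, q);
        }

      phi_m.integrate_scatter(EvaluationFlags::values |
                                EvaluationFlags::gradients,
                              dst);
      phi_p.integrate_scatter(EvaluationFlags::values |
                                EvaluationFlags::gradients,
                              dst);
    }
}
```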