chapel-lang / chapel

a Productive Parallel Programming Language
https://chapel-lang.org

Performance of reductions: serializing small reductions / lowering overheads within parallel constructs #24243

Open bradcray opened 10 months ago

bradcray commented 10 months ago

Summary of Problem

A theme in some recent user issues (https://github.com/chapel-lang/chapel/issues/24196, https://github.com/chapel-lang/chapel/issues/22756) has been how expensive reductions can be when:

(a) they are reducing something simple and modest in size, for which creating any tasks at all is overkill
(b) they occur within already-parallel constructs (e.g., forall loops), where the overhead seems too high compared to a serial for loop

In both cases, as a workaround, the user could rewrite the reduction serially (e.g., + reduce for a in A do a), but this is annoying and non-productive. For that reason, it would be more attractive if we were to:

- implement heuristics that would serialize reductions for sufficiently small/serial cases
- lower the overheads of reductions that occur within already-parallel constructs

The second may be easier to implement since it can likely be decided dynamically in a conditional. The first seems harder (or at least, vaguer) to me since the decision of where to apply the heuristic could vary depending on architecture, reduction expression, etc.
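
For concreteness, here is an illustrative sketch of the two shapes of problem and of the serial workaround; the array names and sizes below are made up and not taken from the linked issues:

// (a) a tiny reduction where spawning any tasks at all is overkill:
var Small: [1..8] real;
var s = + reduce Small;

// today's workaround: force a serial reduction via a for-expression
var sSerial = + reduce for x in Small do x;

// (b) a reduction nested inside an already-parallel construct:
var A: [1..1000, 1..8] real;
var RowSums: [1..1000] real;
forall i in 1..1000 do
  RowSums[i] = + reduce A[i, ..];   // each inner reduce pays task-creation overhead again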

damianmoz commented 10 months ago

I did not realize that

+ reduce for ....

is a serial reduction. Given that, can the following

+ reduce foreach ....

be made to be a vectorized reduction? That would be explicit. I could live with that, even if it is not as succinct as I would like.

I am not a fan of the vaguer approach.

I still think a keyword like reducev or something similar would be more succinct. Even the keyword serial qualifying reduce would work for me if that can be made to work syntactically.

bradcray commented 10 months ago

Hi @damianmoz —

Given that, can the following

+ reduce foreach ....

be made to be a vectorized reduction

This is the hope and intention, yes. Historically, this has not happened because we didn't support foreach expressions (https://github.com/chapel-lang/chapel/issues/19336), which meant the loop couldn't appear in that position syntactically. But @DanilaFe has recently been working on adding these in https://github.com/chapel-lang/chapel/issues/19336, so we're getting closer. Meanwhile, @stonea has been working on adding support for with-clauses to foreach loops, which may be required for a reduce + foreach expression to do the right thing; I'm not certain, though I suspect it is.
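
For reference, a sketch contrasting the spelling that works today with the hoped-for foreach form; the latter is left commented out since it depends on the foreach-expression work in #19336 and may not compile on current releases:

var A: [1..1_000_000] real;

// works today: a serial reduction expressed with a for-expression
const sSerial = + reduce for a in A do a;

// the hoped-for spelling once foreach expressions land (sketch only;
// commented out because it relies on the in-progress #19336 work):
// const sVec = + reduce foreach a in A do a;

// and the fully parallel, multi-task form for comparison
const sPar = + reduce A;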

mppf commented 10 months ago

implement heuristics that would serialize reductions for sufficiently small/serial cases

Some other ideas here:

damianmoz commented 10 months ago

For future reference, for this issue, #24196, and #14000, here is a serial max reduction of the absolute values of a 1D array with unit stride (as would be 99% of the arrays one would see in linear algebra applications). Any suggested improvements are most welcome.

inline proc maxofabs(const v : [?D] real(?w))
{
    inline proc tmax(x : real(w), y : real(w)) // version of what is in AutoMath
    {
        return if x > y || x != x then x else y;
    }
    param zero = 0:real(w);
    const n = D.size;
    const k = D.low;
    var v0 = zero, v1 : real(w), v2 : real(w), v3 : real(w);

    select(n & 3) // use a head+body split (i.e. do not use body+tail)
    {
    when 0 do (v1, v2, v3) = (zero, zero, zero);
    when 1 do (v1, v2, v3) = (zero, zero, abs(v[k]));
    when 2 do (v1, v2, v3) = (zero, abs(v[k]), abs(v[k + 1]));
    when 3 do (v1, v2, v3) = (abs(v[k]), abs(v[k + 1]), abs(v[k + 2]));
    }
    if n > 7 then foreach j in k + (n & 3) .. D.high by 4 // use a 4-way unrolled loop when n >= 8
    {
        v0 = tmax(v0, abs(v[j]));
        v1 = tmax(v1, abs(v[j + 1]));
        v2 = tmax(v2, abs(v[j + 2]));
        v3 = tmax(v3, abs(v[j + 3]));
    }
    else if n > 3 // avoid loop overhead when 4 <= n <= 7
    {
        const j = k + (n & 3);

        v0 = abs(v[j]);
        v1 = tmax(v1, abs(v[j + 1]));
        v2 = tmax(v2, abs(v[j + 2]));
        v3 = tmax(v3, abs(v[j + 3]));
    }
    return tmax(tmax(v0, v2), tmax(v1, v3));
}
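
A minimal usage sketch (values chosen arbitrarily) comparing the hand-unrolled routine above to the one-line reduction it is meant to stand in for:

var v: [1..10] real;
for i in 1..10 do
  v[i] = if i % 2 == 0 then -i else i;   // mix of signs; max |v[i]| is 10.0

writeln(maxofabs(v));          // hand-unrolled serial version above
writeln(max reduce abs(v));    // the equivalent parallel reduction, for comparison
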
bradcray commented 9 months ago

Just noting that @jeremiah-corrado was helping users who ran into this flavor of issue today. Tagging @stonea, as this is becoming one of the performance gotchas that stands out to me more and more as problematic.

e-kayrakli commented 9 months ago

implement heuristics that would serialize reductions for sufficiently small/serial cases

Re the hardness/vagueness of this bit: it does not worry me. We already use such heuristics for array initialization and for parallel bulk copies between arrays. I agree that it is hard to find the perfect threshold, but I am optimistic that there is a number that would make the current painful cases go faster. We can always pick a conservative serialization threshold by default and make it tunable by the user.
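
As a purely illustrative sketch of that kind of tunable threshold (reduceSerialThreshold and sumOf below are made-up names, not existing Chapel knobs):

// hypothetical sketch only: neither reduceSerialThreshold nor sumOf exist today
config const reduceSerialThreshold = 4096;

proc sumOf(A: [] real): real {
  if A.size <= reduceSerialThreshold {
    // small case: skip task creation entirely
    var s = 0.0;
    for a in A do s += a;
    return s;
  } else {
    // large case: the parallel reduction pays for itself
    return + reduce A;
  }
}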

bradcray commented 9 months ago

A difference between array initialization and reductions is that the reduction could be:

var sum = + reduce [i in 0..<here.maxTaskPar] veryExpensiveProcedure(i);

where the trip count may be modest (4–128, say), suggesting serialization, while the body of the loop suggests parallelizing.
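
To make that concrete, here is an illustrative sketch (veryExpensiveProcedure below is just a made-up stand-in for real work); a heuristic keyed only on the trip count would serialize this loop even though the parallel version wins:

use Time;

// made-up stand-in for an expensive loop body (not from the issue)
proc veryExpensiveProcedure(i: int): real {
  var acc = 0.0;
  for j in 1..5_000_000 do
    acc += ((i + j) % 7): real;
  return acc;
}

var sw: stopwatch;

sw.start();
const parSum = + reduce [i in 0..<here.maxTaskPar] veryExpensiveProcedure(i);
sw.stop();
writeln("parallel reduce: ", sw.elapsed(), "s");

sw.clear();
sw.start();
var serSum = 0.0;
for i in 0..<here.maxTaskPar do
  serSum += veryExpensiveProcedure(i);
sw.stop();
writeln("serial loop:     ", sw.elapsed(), "s");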

Setting up a heuristic for the common "just reduce an array" case:

var sum = + reduce A;

is easy and more similar to array initialization, but then is very brittle to modest changes like:

var sum = + reduce (A != 0);
var sum = + reduce [a in A] (a != 0);
var sum = + reduce [i in A.domain] (A[i] != 0);
etc.

So it seems to me like we'd need to do some sort of evaluation of the "weight" of the expression being reduced, which feels new and more complex (and like something we might want to apply to parallel loops in general, not just those involved in reductions).

But I'd be very happy if you were to point out that I'm missing something. :)

e-kayrakli commented 9 months ago

But I'd be very happy if you were to point out that I'm missing something.

Nope. I don't think you are.

But I may be more (naively?) optimistic than you that we can make the common cases significantly faster by potentially sacrificing some performance in less common cases, and that this could be an achievable net win in the near term.