mcabbott / Tullio.jl


[Question] How to change summation order? #162

Open diret47 opened 1 year ago

diret47 commented 1 year ago

I would like to track and change the summation order in @tullio. What is the default order when @tullio meets repeated indices? And is it possible to change it?

mcabbott commented 1 year ago

Do you mean the order of loop nesting, or the order of traversal? The latter is (for large enough arrays) done in parallel.

diret47 commented 1 year ago

I'm not sure whether the summation order is determined by loop nesting or by traversal. In this example, @tullio sums over the 3 indices i, j and k. Which index is put innermost or outermost in the loop by default? Or are all three computed in parallel together?

julia> using Tullio

julia> @tullio A[_]:=exp(i+j+k) (i in 1:3,j in 1:3,k in 1:10)
1-element Vector{Float64}:
 3.1763921934292294e7
mcabbott commented 1 year ago

In this case i is innermost. The info block shows the order (which it simply takes from the order supplied); verbose=2 prints the actual loops:

julia> @tullio A[_] := exp(i+j+k) (i in 1:3,j in 1:3,k in 1:10)  verbose=true
[ Info: no gradient to calculate
┌ Info: threading threshold (from cost = 11)
└   block = 23832
┌ Info: reduction index ranges
│   i = Base.OneTo(3)
│   j = Base.OneTo(3)
└   k = Base.OneTo(10)
1-element Vector{Float64}:
 3.1763921934292294e7
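
Spelled out by hand, the nesting here corresponds roughly to the loops below (a sketch only, not the code @tullio actually generates, which also handles blocking and threading):

result = let acc = 0.0
    for k in 1:10, j in 1:3, i in 1:3   # i is the innermost loop
        acc += exp(i + j + k)
    end
    [acc]                               # A[_] := ... returns a 1-element Vector
end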

Maybe worth noting that it hasn't figured out that this is a scalar reduction, and so it will not do this in parallel. (Although it may still break up the iteration space into blocks, rather than iterating all the way in any one index.)

julia> @btime @tullio A[_] := exp(i+j-k) (i in 1:300,j in 1:300,k in 1:1000)
  min 649.319 ms, mean 650.541 ms (1 allocation, 64 bytes)
1-element Vector{Float64}:
 5.495344381637131e260

julia> @btime @tullio A := exp(i+j-k) (i in 1:300,j in 1:300,k in 1:1000)  # 4 threads
  min 164.052 ms, mean 218.762 ms (89 allocations, 4.73 KiB)
5.495344381637131e260
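
By "blocks" I mean something in the spirit of this divide-and-conquer sketch (illustrative only, with a made-up blocksize threshold; Tullio's real strategy splits several index ranges at once and takes its threshold from the cost estimate printed above):

# Rough sketch of recursively splitting one reduction range into blocks.
# Not Tullio's actual code, just the general idea.
function blockreduce(f, r::UnitRange; blocksize = 10_000)
    if length(r) <= blocksize
        acc = 0.0
        for k in r                       # small enough: plain loop
            acc += f(k)
        end
        return acc
    else
        mid = (first(r) + last(r)) ÷ 2   # otherwise split the range in half
        return blockreduce(f, first(r):mid; blocksize) +
               blockreduce(f, mid+1:last(r); blocksize)
    end
end

# e.g. blockreduce(exp, 1:10) ≈ sum(exp, 1:10)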
diret47 commented 1 year ago

Thanks for your response and explanation. So in this case, if I reorder the indices in the declaration, would the nesting of the loops also change?

julia> @tullio A:= exp(i+j+k) (j in 1:3,i in 1:3,k in 1:10)  verbose=true
[ Info: no gradient to calculate
┌ Info: threading threshold (from cost = 11)
└   block = 23832
┌ Info: reduction index ranges
│   j = Base.OneTo(3)
│   i = Base.OneTo(3)
└   k = Base.OneTo(10)
3.1763921934292294e7

Moreover, for another case with Einstein notation, it seems that the default loop order is the index order from left to right. Can one change it through some option?

julia> using Tullio

julia> a=rand(10,10,10);b=rand(10,20);

julia> @tullio A[i,j]:=a[i,k,k]*b[k,j];

julia> @tullio A[i,j]:=a[i,k,k]*b[k,j] verbose=true
┌ Info: symbolic gradients
│   inbody =
│    2-element Vector{Any}:
│     :(𝛥a[i, k, k] = 𝛥a[i, k, k] + 𝛥ℛ[i, j] * conj(b[k, j]))
└     :(𝛥b[k, j] = 𝛥b[k, j] + 𝛥ℛ[i, j] * conj(a[i, k, k]))
┌ Info: threading threshold (from cost = 1)
└   block = 262144
┌ Info: left index ranges
│   i = Base.OneTo(10)
└   j = Base.OneTo(20)
┌ Info: reduction index ranges
└   k = Base.OneTo(10)
mcabbott commented 1 year ago

would the nesting of the loops also change?

Yes.

When there are arrays, you can't really change it; there's no option to specify this. (Sometimes re-ordering an expression a[...] * b[...] to b[...] * a[...] may cause it to pick a different order.)
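
For instance, one could compare the two orderings and check the verbose printout (whether the loop order actually changes is not guaranteed; a and b here are the arrays from your example above):

using Tullio
a = rand(10,10,10); b = rand(10,20);
@tullio A[i,j] := a[i,k,k] * b[k,j]  verbose=true   # factor order as written above
@tullio A[i,j] := b[k,j] * a[i,k,k]  verbose=true   # factors swapped; may or may not pick a different loop order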

But the goal is not to care. For large arrays, when cache-friendliness is a reason to care a lot about loop order, most einsum expressions will have some arrays in the wrong order. Tullio runs a fairly crude (cache-oblivious?) blocking strategy, which in many cases makes the outer loop order not matter much:

julia> let n = 1
         a=rand(10n,10n,10n); b=rand(10n,20n);
         @tullio A[i,j]:=a[i,k,k]*b[k,j]  # make A
         @btime @tullio $A[i,j] = $a[i,k,k] * $b[k,j]  # write into A
         B = transpose(permutedims(A))
         @btime @tullio $B[i,j] = $a[i,k,k] * $b[k,j]  # opposite memory order
       end;
  min 1.046 μs, mean 1.057 μs (0 allocations)  # too small to multi-thread
  min 1.046 μs, mean 1.057 μs (0 allocations)

julia> let n = 100
         a=rand(10n,10n,10n); b=rand(10n,20n);
         @tullio A[i,j]:=a[i,k,k]*b[k,j]  # make A
         @btime @tullio $A[i,j] = $a[i,k,k] * $b[k,j]  # write into A
         B = transpose(permutedims(A))
         @btime @tullio $B[i,j] = $a[i,k,k] * $b[k,j]  # opposite memory order
       end;
  min 238.708 ms, mean 364.186 ms (50 allocations, 2.56 KiB)  # threads + blocks
  min 251.204 ms, mean 358.758 ms (51 allocations, 2.59 KiB)  # ... so that wrong order hardly matters

Maybe not the best example, since in both of these, the major impact of blocking is on mixing up the reduction loop over k with the outer ones on i,j. With threads=false these are equally slow:

  min 1.939 s, mean 1.939 s (0 allocations)
  min 1.937 s, mean 1.948 s (0 allocations)
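
(Those two numbers are from the same @btime lines as above with the option added, i.e. something like:)

@btime @tullio $A[i,j] = $a[i,k,k] * $b[k,j]  threads=false
@btime @tullio $B[i,j] = $a[i,k,k] * $b[k,j]  threads=false   # opposite memory order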

None of this is very configurable. Making it so seemed like a big project, and a bigger library... something more like Halide.

Maybe also worth noting that if you load LoopVectorization, then that will often re-order inner loops. Tullio decides loop order in advance, looking only at the expression provided, but LV waits to see the actual types and then uses generated code which knows that e.g. Array has stride 1 in its first dimension. It has a much more detailed cost model for choosing what to do.
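
A minimal way to try that, assuming LoopVectorization is installed, is just to load it before calling the macro; @tullio should then use it for the innermost loops:

using LoopVectorization, Tullio   # load LV so that @tullio can pick it up

a = rand(10,10,10); b = rand(10,20);
@tullio A[i,j] := a[i,k,k] * b[k,j]   # inner loops now go through LV, which may re-order / vectorise them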