No, since I added it to this as well: https://github.com/JuliaParallel/MPI.jl/blob/609a8feab39a8dd8d6649f548314b00b8c612945/test/test_reduce.jl#L70
Also, should we add a reference to `@Op` in the error message at https://github.com/JuliaParallel/MPI.jl/blob/780aaa0fdb768713a329659338a9c9cde23c41a8/src/operators.jl#L95? Something like "[...] As an alternative, use the `MPI.@Op` macro..."
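A rough sketch of how the amended error could look (hypothetical only: the existing message at src/operators.jl#L95 is not reproduced here, and only the last sentence is the suggestion above):

```julia
# Hypothetical amended error; wording is a placeholder, not MPI.jl's actual text.
error("custom reduction operators are not supported on this platform. " *
      "As an alternative, use the `MPI.@Op` macro to pre-register the operator.")
```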
It's a bit unfortunate that this introduces an alternative method to `Op` for defining custom operators: generic libraries will have to pick either `@Op` or `Op` (presumably the former if they want to work on all platforms), but I guess we don't have a more comprehensive solution at the moment.
The situation is even funnier. `Op` tries to handle everything generically: it takes a function `f` and doesn't know whether it is a closure or not. `@Op` "just" creates a method of `Op` that dispatches on the function `f`, and tada... So it doesn't introduce an alternative method; you still have to call `Op`, it just "pre-registers" `f`.
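For anyone following along, here is a self-contained sketch of that pre-registration pattern (the names `OpWrapper`, `wrap_op`, and `@register_op` are illustrative stand-ins, not MPI.jl's actual internals):

```julia
struct OpWrapper{F}
    f::F   # the real code would instead store a C-callable pointer built with @cfunction
end

# Generic fallback: wrapping an arbitrary `f` requires a runtime closure
# cfunction, which Julia doesn't support on all platforms (e.g. ARM).
wrap_op(f) = error("cannot wrap $f at runtime on this platform; pre-register it")

# "Pre-register" `f`: define a method of `wrap_op` that dispatches on
# `typeof(f)`, so a later `wrap_op(f)` call hits this method instead of the
# fallback. Since `f` is a known, non-closure function at this point, the real
# macro can create the cfunction pointer here.
macro register_op(f)
    quote
        $(esc(:wrap_op))(::typeof($(esc(f)))) = OpWrapper($(esc(f)))
    end
end

mymax(a, b) = a > b ? a : b
@register_op mymax

wrap_op(mymax)   # resolves to the pre-registered method; no runtime closure needed
```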
I like the new name much better; it makes the difference clearer, thanks!
Should we update the documentation at https://juliaparallel.org/MPI.jl/stable/knownissues/#Custom-reduction-operators?
With this diff:
```diff
diff --git a/docs/examples/03-reduce.jl b/docs/examples/03-reduce.jl
index 86d22be1..26aa9658 100644
--- a/docs/examples/03-reduce.jl
+++ b/docs/examples/03-reduce.jl
@@ -33,8 +33,13 @@ end
 X = randn(10,3) .* [1,3,7]'
+# Register the custom reduction operator. This is necessary only on platforms
+# where Julia doesn't support closures as cfunctions (e.g. ARM), but can be used
+# on all platforms for consistency.
+MPI.@RegisterOp(pool, SummaryStat)
+
 # Perform a scalar reduction
-summ = MPI.Reduce(SummaryStat(X), pool, root, comm)
+summ = MPI.Reduce(SummaryStat(X), pool, comm; root)
 if MPI.Comm_rank(comm) == root
     @show summ.var
@@ -42,7 +47,7 @@ end
 # Perform a vector reduction:
 # the reduction operator is applied elementwise
-col_summ = MPI.Reduce(mapslices(SummaryStat,X,dims=1), pool, root, comm)
+col_summ = MPI.Reduce(mapslices(SummaryStat,X,dims=1), pool, comm; root)
 if MPI.Comm_rank(comm) == root
     col_var = map(summ -> summ.var, col_summ)
```
With this change I can run the custom reduction example from the documentation on aarch64-darwin; without it, it errors at https://github.com/JuliaParallel/MPI.jl/blob/aac9688e6961bc7e3aeeba7600f5e7d0b10596a3/src/operators.jl#L98. I think we may want to suggest that users look at `MPI.@RegisterOp` in that error message.
It turns out StaticArrays is used? I'm missing where, though 😕
Last bit: update the docs about known issues (https://github.com/JuliaParallel/MPI.jl/pull/871#issuecomment-2359652588), and then this should be good to go for me!
@giordano done!
Could we make use of the potential `PerProcess` construct here (https://github.com/JuliaLang/julia/pull/55793)?
> Could we make use of the potential `PerProcess` construct
Yeah, I think that could be used to simplify the implementation generally, and also to replace our home-grown implementations for other globals.
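For reference, a sketch of how that construct could replace a hand-rolled lazy global, assuming the `Base.OncePerProcess` API that the linked PR converged on (Julia 1.12+); `op_registry` is a made-up name for illustration:

```julia
# Runs the initializer the first time `op_registry()` is called in a process,
# then returns the cached value; replaces manual Ref-plus-lock patterns.
const op_registry = Base.OncePerProcess{Dict{Function,Ptr{Cvoid}}}() do
    Dict{Function,Ptr{Cvoid}}()
end

registry = op_registry()   # initialized on first use, cached afterwards
```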
Do we need to delete https://github.com/JuliaParallel/MPI.jl/blob/609a8feab39a8dd8d6649f548314b00b8c612945/test/test_reduce.jl#L7-L10 to actually test this?