import operator

import numpy

@omp.parallel(private=['a'], shared=['b'], reduce=[operator.add, 'c'])
def parallel_section(a, b, c):
    if omp.master:
        b[...] = 0
        c[...] = 0
    omp.barrier()
    for i in omp.range(0, 1000, schedule='static'):
        c[...] += i
        a[...] += i
        with omp.critical:
            b[...] += i

a = numpy.empty((), dtype='i8')
b = numpy.empty((), dtype='i8')
c = numpy.empty((), dtype='i8')
parallel_section(a, b, c)
assert c == (0 + 999) * 1000 // 2
assert b == (0 + 999) * 1000 // 2
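To make the intended semantics concrete, here is a minimal plain-threading sketch of what the decorator above would have to generate: a per-thread private accumulator, a lock-guarded "critical" update of the shared scalar, and a final add-reduction over per-thread partials. All names here are illustrative, not part of any real omp module.

```python
import threading

import numpy


def parallel_section_threads(num_threads=4, n=1000):
    b = numpy.zeros((), dtype='i8')      # shared, guarded by a lock
    lock = threading.Lock()
    partials = [0] * num_threads         # per-thread reduction slots for c

    def worker(tid):
        a = 0                            # private copy per thread
        # static schedule: thread tid takes iterations tid, tid+T, tid+2T, ...
        for i in range(tid, n, num_threads):
            partials[tid] += i           # reduction contribution for c
            a += i                       # private work, discarded at exit
            with lock:                   # the omp.critical equivalent
                b[...] += i

    threads = [threading.Thread(target=worker, args=(t,))
               for t in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    c = sum(partials)                    # operator.add reduction
    return int(b), c
```

Both return values come out to (0 + 999) * 1000 // 2, matching the assertions above.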
How to handle exceptions?
ProcessGroup can probably do this well enough.
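One plausible shape for this, sketched with the stdlib only: run each worker through an executor and let Future.result() re-raise the first worker exception in the caller, which is roughly what a ProcessGroup-based runner would need to do. run_workers is a hypothetical helper, not an existing API.

```python
from concurrent.futures import ThreadPoolExecutor


def run_workers(fn, nworkers):
    # Submit one task per worker id; result() re-raises any exception
    # that was raised inside the corresponding worker.
    with ThreadPoolExecutor(max_workers=nworkers) as pool:
        futures = [pool.submit(fn, tid) for tid in range(nworkers)]
        return [f.result() for f in futures]
```

A failing worker then surfaces in the caller: run_workers(lambda tid: 1 // (tid % 2), 4) raises ZeroDivisionError from the tid=0 worker.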
We need a way to ensure MapReduce dispatches exactly one payload per process. An ordered section at the beginning of the work function should be able to achieve this.
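One way to sketch the claim-on-entry idea with stdlib primitives: each worker, at the start of its work function, atomically takes exactly one payload from a shared queue, so no payload is dispatched twice and no worker receives more than one. make_dispatcher is an illustrative stand-in for the real plumbing, not an existing API.

```python
import queue


def make_dispatcher(payloads):
    q = queue.Queue()
    for p in payloads:
        q.put(p)

    def work(worker_fn):
        # Queue.get_nowait() is atomic, so each caller claims exactly
        # one payload; a fourth caller here would get queue.Empty.
        payload = q.get_nowait()
        return worker_fn(payload)

    return work
```

For example, three calls against make_dispatcher([10, 20, 30]) each see a distinct payload, in FIFO order.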
The old OpenMP-like support was dropped in the single-file rewrite. Time to think about this more carefully.