executors / futures

A proposal for a futures programming model for ISO C++

`.bulk_then` AKA we need a high-level API for bulk #24

Open brycelelbach opened 6 years ago

brycelelbach commented 6 years ago

`.then` on a future produced by a bulk executor interface does not seem very useful, because you cannot attach a bulk continuation to it. `.then` inherently launches a single-threaded continuation.

We should probably think about what a `.bulk_then` (i.e., a high-level interface to `bulk_then_execute`) would look like.

@jaredhoberock, thoughts?

jaredhoberock commented 6 years ago

I agree that `bulk_then` would be a valuable convenience built on top of `BulkThenExecutors`. It's not clear to me that it should be a member function of futures. I would like for it to take an execution policy parameter. I think I'd prefer tackling `bulk_then` when/if we introduce other bulk convenience control structures like `bulk_invoke` & `bulk_async`, to ensure they all work in the same basic way.

dhollman commented 6 years ago

I think we want to think carefully about this. We probably want a more robust abstraction that can carry shape information from one continuation to another to allow, for instance, executors to do coarse-grained task graph refinement (i.e., connecting indices across executions without a strict join). We also run the risk of people hiding a `bulk_execute` call inside a `.then()`, leading to the loss of potentially critical information in the scheduler.

But I think what we want is a broader abstraction for this, not just tacking a `.bulk_then` onto a future. (Though maybe that's a reasonable place to start?)

LeeHowes commented 6 years ago

I put it in my example for the Jacksonville meeting, but it was a simplistic and very clumsy API so I think we can safely leave it out of V1.

If we have future collapsing, then `.then([](){ return parallel_algorithm(...); }).then(...)` works fairly well. We could also build lazy parallel algorithms that return a promise rather than a future, such that we could write `.continue_with(parallel_algorithm).then(...)`, where the future is returned by `future.continue_with`, not by the algorithm itself.

So I agree, let's defer it and at least consider these options as part of the parallel algorithm updates with `.on`.