Closed: matthijs closed this issue 2 years ago
This is a missing feature; there currently isn't a satisfactory solution. You could when_all N senders at a time in a loop. That's the best you can do today.
Ah thanks!
If anyone is coming across this limitation, I implemented this very simple snippet to handle this:
// Note: the range must be taken by non-const reference, since std::move
// through a const reference would silently copy, and unifex::task is move-only.
template <typename Rng, std::size_t... I>
constexpr auto when_all_vec_impl(Rng& rng, std::index_sequence<I...>)
{
    return unifex::when_all(std::move(rng[I])...);
}

template <typename Rng, std::size_t N, typename Indices = std::make_index_sequence<N>>
constexpr auto when_all_vec(Rng& rng)
{
    return when_all_vec_impl(rng, Indices{});
}

// Create tasks...
std::vector<unifex::task<void>> tasks;
// tasks.push_back(...);

// Call when_all_vec (assumes tasks has at least 100 items)
constexpr std::size_t num = 100;
unifex::sync_wait(when_all_vec<decltype(tasks), num>(tasks));
Of course you need to handle the remaining tasks in the tasks container.
> There currently isn't a satisfactory solution.
libunifex newbie here. I'm curious: is there a fundamental problem preventing a satisfactory solution? Would some sender/receiver analogue of cppcoro's when_all do the trick (i.e., when_all on a container requires memory allocation)?
Separate question: in general, I can't find any wording in P2300 for when_all accepting a vector of senders (or a range, or begin/end iterators). Would P2300 one day have such language, or would that come in another paper?
There's nothing technical preventing a range-based sender/receiver when_all algorithm. Nobody has written one, that's all.
What I neglected to mention in my original answer is that the best way to do this in libunifex today is to spawn the work in an async_scope and then wait on the scope for all the work to finish.
EDIT: And to answer your other question, P2300 will probably not get more algorithms, but there are certain to be follow-on papers that add more algorithms, this one included.
EDIT 2: I know @lums658 wants this algorithm also and is trying his hand at implementing it, but I don't know if it's for libunifex or the P2300 reference implementation.
Thank you - I'll check out async_scope as well.
As a learning exercise for myself, I've got a half-baked implementation working that's effectively a "replace tuple with vector" version of the variadic implementation, if anyone wants to compare notes.
async_scope seems to do just the trick, acting as a nice provider of heap allocation as needed for creating a when_all for a runtime container of tasks. I'll leave https://github.com/ccotter/libunifex/blob/when_all/examples/when_all_scope.cpp#L61 as a partially complete attempt at a generic solution built on top of async_scope, with examples at the bottom of the file.
when_all_range algorithm. It takes either a std::vector<Sender> or a pair of Iterators.
Hi,
I am trying to schedule a lot of tasks on the threadpool. An example of what I am trying to achieve is probably easier to understand:
When using the for loop with sync_wait, execution is still sequential (obviously). How can I achieve this in such a way that the coroutines are scheduled and executed on the thread pool in parallel?
Regards, Matthijs