Well, right now there isn't really a much better way (though you could handwrite a recursive function with `join`), but I'd like to add a kind of "work queue" abstraction (basically similar to scoped threads) that would probably address this use case.
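For the record, the handwritten `join` version might look something like this (just a sketch: `par_for_each` is a made-up helper, and since every item adds a stack frame, real code would want to batch):

```rust
use rayon::join;

// Naive sketch: process one item while the rest of the iterator is
// consumed recursively. Rayon may steal either half onto another thread.
fn par_for_each<I, T, F>(mut iter: I, f: &F)
where
    I: Iterator<Item = T> + Send,
    T: Send,
    F: Fn(T) + Sync,
{
    if let Some(item) = iter.next() {
        // NOTE: recursion depth grows with the number of items, so this
        // only works for coarse-grained, bounded inputs.
        join(|| f(item), || par_for_each(iter, f));
    }
}
```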
This seems like a fairly popular feature, judging by the number of issues people have created that are slight variations on this general `Iterator`-to-`ParallelIterator` adapter idea. Has there been any progress on this?
My particular use case is that I've got an iterator which yields every permutation of 4 numbers from 18 to 200 (around 167,961,600,000,000 permutations), then applies a bunch of filters and maps to find the "ideal" combinations. In this case it's not feasible to save the combinations into a `Vec`, yet the filters and maps are all trivially parallelizable.
It's a popular request, but there's not an obvious way to implement it. We've had a few ideas, but sadly no progress to report.
> My particular use case is that I've got an iterator which yields every permutation of 4 numbers from 18 to 200 (around 167,961,600,000,000 permutations),
Perhaps I misunderstand, but wouldn't the number of permutations be (200-18)^4 -- roughly 10^9? If you can set up that generator with ranges, you can parallelize that.
```rust
(18..200).into_par_iter().for_each(|x| (18..200).into_par_iter().for_each(|y| ...))
```
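Spelled out for all four levels, that might look like the following (a sketch: `check` is a hypothetical stand-in for the actual filters and maps, and parallelizing only the outer two levels already yields plenty of tasks to split):

```rust
use rayon::prelude::*;

// Hypothetical stand-in for the real filters and maps.
fn check(a: u32, b: u32, c: u32, d: u32) {
    if a + b + c + d == 500 {
        println!("{a} {b} {c} {d}");
    }
}

fn main() {
    (18..200u32).into_par_iter().for_each(|a| {
        (18..200u32).into_par_iter().for_each(|b| {
            // The inner two loops can stay sequential; the outer two
            // already produce ~33k parallel tasks.
            for c in 18..200 {
                for d in 18..200 {
                    check(a, b, c, d);
                }
            }
        });
    });
}
```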
> In this case it's not feasible to save the combinations into a `Vec`,
Another strategy is to batch this work: generate a manageable amount into a `Vec`, process those in parallel, and repeat in a loop until you've done them all.
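In code, that loop might look roughly like this (a sketch: the batch size, the source iterator, and `process` are all hypothetical):

```rust
use rayon::prelude::*;

// Hypothetical per-item work.
fn process(x: u64) {
    let _ = x.wrapping_mul(x);
}

fn main() {
    const BATCH: usize = 100_000; // tuning knob: big enough to amortize, small enough to fit in memory
    let mut iter = (0..10_000_000u64).map(|x| x * 3); // stand-in for the real sequential iterator

    loop {
        // Drain a bounded chunk from the sequential iterator...
        let batch: Vec<u64> = iter.by_ref().take(BATCH).collect();
        if batch.is_empty() {
            break;
        }
        // ...and hand that chunk to Rayon.
        batch.into_par_iter().for_each(process);
    }
}
```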
I'm using Rayon to parallelise a fairly CPU-heavy operation on a bunch of `String` instances coming to me via the csv crate. I wound up here after I did exactly what @cuviper suggested: batching into a `Vec`. That gave me a good performance gain, but not as much as should be possible – the csv crate has to do enough work that I lose a lot of time building the batches.
Would a generic solution be to allow Rayon to `.par_map()` on the rx side of a channel, or have a templated function that makes that happen given the type of the channel? I could then stick the iterator bit in a thread, have it spit `String` instances down the channel, and Rayon could do its thing. This would be even more general than supporting `Read` or `Iterator` as a provider.

(I'm fairly new to Rust, so apologies if I'm missing something and this is not actually easier than just supporting `Iterator`.)
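For what it's worth, this is essentially what the `ParallelBridge` adapter discussed below ended up enabling. A sketch of the channel version (the producer loop and the per-item work are hypothetical):

```rust
use rayon::prelude::*;
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel::<String>();

    // Producer thread: stand-in for the csv-driven iterator.
    let producer = thread::spawn(move || {
        for i in 0..1_000 {
            tx.send(format!("record {i}")).unwrap();
        }
        // `tx` is dropped here, which closes the channel.
    });

    // The receiver is an ordinary sequential Iterator over the sent
    // items; par_bridge() fans them out to Rayon's worker threads.
    rx.into_iter().par_bridge().for_each(|s| {
        let _ = s.len(); // hypothetical CPU-heavy work per String
    });

    producer.join().unwrap();
}
```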
@grahame See also https://github.com/QuietMisdreavus/polyester, but I'm not sure that the cost of allocating `String`s makes this worth it.
It looks like this is still open because #550, which added `ParallelBridge`, was considered only "half" of this issue. But does that really capture the requirement expressed in the original report? `Read`, in contrast with iterators based on it like `Lines` or the above `ParseIter`, does not have a fixed item count, or rather a fixed buffer size. To me, it appears that how to get from bytes to items is out of scope for Rayon, so could this be closed now that `ParallelBridge` is provided?
Yes, I think it's fair to say that `ParallelBridge` solves `Iterator` input, and we have no plans for `Read`.
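As an illustration, going through `ParallelBridge` from a buffered reader might look like this (a sketch; the file name and per-line work are hypothetical):

```rust
use rayon::prelude::*;
use std::fs::File;
use std::io::{BufRead, BufReader};

fn main() -> std::io::Result<()> {
    let reader = BufReader::new(File::open("data.txt")?);

    // Reading stays sequential; par_bridge() hands the resulting
    // String lines to Rayon's thread pool for the CPU-bound part.
    reader
        .lines()
        .filter_map(Result::ok)
        .par_bridge()
        .for_each(|line| {
            let _ = line.split_whitespace().count(); // hypothetical parsing work
        });

    Ok(())
}
```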
I see that only ranges and slices can be converted into parallel iterators.
I wonder how best to utilize rayon while reading from a file.
ATM I read files into buffers, then parse them using an iterator. I imagine there's a better way to use rayon than:
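(a rough reconstruction of the pattern just described, with hypothetical names:)

```rust
use rayon::prelude::*;
use std::fs;

fn main() -> std::io::Result<()> {
    // Read the whole file into one buffer...
    let buffer = fs::read_to_string("input.txt")?;

    // ...parse it sequentially into a Vec so Rayon has something splittable...
    let records: Vec<&str> = buffer.lines().collect();

    // ...then do the actual work in parallel.
    records.par_iter().for_each(|record| {
        let _ = record.len(); // hypothetical per-record work
    });

    Ok(())
}
```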