Drvanon opened 11 months ago
I was thinking about this earlier today. Though it would be possible (and significantly beneficial) to identify chain sections that do not depend on each other and execute those parts in parallel, that might be very challenging. What might be simpler is to provide a parallel API for the map functions, where the `pydash.collections.itermap` function is replaced with a `threaded_map` function.
Something maybe like:
```python
import multiprocessing

import pydash as pyd

pool = multiprocessing.Pool()

def pooled_iter_map(collection, iteratee):
    # Note: pool.map takes the function first, then the iterable
    # (the original sketch had the arguments reversed).
    return pool.map(iteratee, collection)

def pooled_map(collection, iteratee):
    return list(pooled_iter_map(collection, iteratee))

def pooled_flat_map(collection, iteratee=None):
    return pyd.flatten(pooled_iter_map(collection, iteratee=iteratee))
```
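One caveat with the sketch above: `multiprocessing.Pool` must pickle the iteratee, so lambdas (the most common kind of pydash iteratee) would fail. A thread-based variant built only on the standard library sidesteps that; the name `threaded_map` here is the hypothetical API from this proposal, not an existing pydash function:

```python
from concurrent.futures import ThreadPoolExecutor

def threaded_map(collection, iteratee, max_workers=None):
    # ThreadPoolExecutor.map preserves input order and accepts
    # lambdas, unlike multiprocessing.Pool, which pickles the iteratee.
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        return list(executor.map(iteratee, collection))

print(threaded_map([1, 2, 3], lambda x: x * x))  # → [1, 4, 9]
```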
Chains offer an amazing opportunity for parallelization, since unless a call to `thru` is encountered (or a function accepts the whole input), all calls can be parallelized. Right now, when I execute the following:
Whereas I believe that the following output would also be quite possible:
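To make the chain idea concrete, here is a hedged, stdlib-only sketch (the name `parallel_chain` is hypothetical, not pydash API): each element can flow through the whole composed pipeline independently, so per-element steps parallelize until a `thru`-style step needs the entire collection at once.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_chain(collection, *iteratees):
    # Compose the per-element steps, then run the composite once per
    # element in parallel. A "thru"-style step (one that consumes the
    # whole collection) would force a synchronization point instead.
    def composed(item):
        for fn in iteratees:
            item = fn(item)
        return item

    with ThreadPoolExecutor() as executor:
        return list(executor.map(composed, collection))

result = parallel_chain([1, 2, 3], lambda x: x + 1, lambda x: x * 10)
# result == [20, 30, 40]
```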