aboria / Aboria

Enables computations over a set of particles in N-dimensional space
https://aboria.github.io/Aboria

add ability to use c++17 parallel algorithms #13

Open martinjrobins opened 7 years ago

martinjrobins commented 7 years ago

v0.5 added the ability to use the Thrust library for parallel algorithms. This is done in src/detail/Algorithms.h using custom Aboria algorithms, which use tag dispatching to call either Thrust or STL algorithms, depending on whether the Level 0 vector is an STL vector or a Thrust vector.
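A minimal sketch of the tag-dispatch idea, assuming hypothetical tag and function names (these are illustrative, not Aboria's actual ones):

```cpp
#include <algorithm>
#include <vector>

// Hypothetical tags standing in for the backend of the Level 0 vector.
struct std_tag {};
struct thrust_tag {};

// Serial STL implementation, selected for STL-backed vectors.
template <typename Iterator, typename Function>
void for_each_impl(Iterator first, Iterator last, Function f, std_tag) {
    std::for_each(first, last, f);
}

// Thrust implementation, selected for Thrust-backed vectors.
template <typename Iterator, typename Function>
void for_each_impl(Iterator first, Iterator last, Function f, thrust_tag) {
    // thrust::for_each(first, last, f);  // requires Thrust headers
}

// Trait mapping an iterator to its tag; a real version would specialise
// this for thrust::device_vector iterators.
template <typename Iterator>
struct iterator_tag_of { using type = std_tag; };

// The public algorithm dispatches on the tag at compile time.
template <typename Iterator, typename Function>
void aboria_for_each(Iterator first, Iterator last, Function f) {
    for_each_impl(first, last, f, typename iterator_tag_of<Iterator>::type{});
}
```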

For STL vectors, if C++17 is available, the C++17 parallel algorithms could be used for all of these algorithms. This functionality needs to be added (i.e. detect whether C++17 is available and, if so, use a parallel execution policy).
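One possible shape for the detection, using the standard feature-test macro `__cpp_lib_parallel_algorithm` (defined by `<algorithm>`/`<numeric>` when the C++17 parallel algorithms are available); the wrapper name is hypothetical:

```cpp
#include <algorithm>
#include <vector>

#if defined(__cpp_lib_parallel_algorithm)
#include <execution>
#endif

// Hypothetical wrapper; illustrative only.
template <typename Iterator, typename Function>
void parallel_for_each(Iterator first, Iterator last, Function f) {
#if defined(__cpp_lib_parallel_algorithm)
    // C++17 path: run the algorithm with a parallel execution policy.
    std::for_each(std::execution::par, first, last, f);
#else
    // Pre-C++17 fallback: serial STL algorithm.
    std::for_each(first, last, f);
#endif
}
```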

preejackie commented 7 years ago

@martinjrobins Not all algorithms can be parallelized. Please provide some more information. Thanks, Pree

martinjrobins commented 7 years ago

There are a number of functions/algorithms in src/detail/Algorithms.h that are based on the parallel algorithms in Thrust (https://thrust.github.io/), e.g. for_each, gather, scatter_if, inclusive_scan, transform_exclusive_scan. These algorithms are used by Aboria in numerous places, and it is assumed that every algorithm runs in parallel.
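For illustration, here is how two of the algorithms named above look in plain Thrust (compiled with nvcc, they run in parallel on the device; this is generic Thrust usage, not Aboria code):

```cpp
#include <thrust/device_vector.h>
#include <thrust/for_each.h>
#include <thrust/scan.h>

// Simple device functor used by thrust::for_each below.
struct doubler {
    __host__ __device__ void operator()(int& x) const { x *= 2; }
};

int main() {
    thrust::device_vector<int> v(8, 1);

    // inclusive_scan: v becomes {1, 2, 3, ..., 8}
    thrust::inclusive_scan(v.begin(), v.end(), v.begin());

    // for_each: double every element in place, in parallel on the device
    thrust::for_each(v.begin(), v.end(), doubler{});
    return 0;
}
```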

Every data structure in Aboria is based on an STL-like vector. The default is to use std::vector, but you can also choose to use thrust::device_vector. Passing a thrust::device_vector iterator to any of the functions in src/detail/Algorithms.h will call the equivalent Thrust algorithm; passing a std::vector::iterator will call the equivalent STL algorithm, or (if this doesn't exist) a custom implementation of the algorithm.
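thrust::gather is an example with no direct STL equivalent, so the serial fallback is a hand-written loop; a sketch of what such a custom implementation might look like (hypothetical, not Aboria's actual code):

```cpp
// result[i] = input[map[i]] for each index in [map_first, map_last)
template <typename MapIterator, typename RandomAccessIterator,
          typename OutputIterator>
OutputIterator gather(MapIterator map_first, MapIterator map_last,
                      RandomAccessIterator input, OutputIterator result) {
    for (; map_first != map_last; ++map_first, ++result) {
        *result = input[*map_first];
    }
    return result;
}
```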

Naturally, all the Thrust algorithms are already parallelized; I would like to do the same on the STL side. It seems like the easiest way to approach this is to use the new parallel algorithms in C++17, but I am open to alternatives.

(btw. I'm also not happy with the organisation of src/detail/Algorithms.h: everything is lumped into one file and is difficult to find, so any improvements here would also be welcome)