Open rakhimov opened 8 years ago
Oh, I hate when that happens!
On 1/18/2017 11:57 PM, Olzhas Rakhimov wrote:
Boost accumulators don't support parallelism via map-reduce or combine.
@jto888 I am not sure how bad the naive implementation would be: keep one vector per thread, then push all the data from those vectors into a single accumulator (the reduce step).
It would be inefficient, but by my estimates, sampling and calculating the probabilities/values would be much more expensive (on the order of 200x, with many cache misses), at least initially.
An alternative approach is to implement a custom `combine<tag::stat_measure>(accumulators...)` per statistical measure.
This is simple for the mean and standard deviation, but it doesn't sound fun when the statistical measures get complex.