astrofrog / fast-histogram

:zap: Fast 1D and 2D histogram functions in Python :zap:
BSD 2-Clause "Simplified" License
266 stars 28 forks

Drop the GIL during calculations #31

Closed henryiii closed 6 years ago

henryiii commented 6 years ago

This is an attempt to address #30.

astrofrog commented 6 years ago

@henryiii - I don't fully understand the implications of doing this - in particular, is this safe given that we are modifying count and dataptr inside these blocks? Doesn't it mean that two threads could accidentally modify the same array?

Do you see any performance improvements with this?

henryiii commented 6 years ago

Releasing the GIL means other Python threads can continue running, and could potentially change variables the user has access to. Since the user does not have access to count (you created the array it points at), and you only read from dataptr, it should be safe.

As for the benefit: it means that fast_histogram now plays "nice" with threading, allowing Python to continue while it calculates. I'll test it in a little example and post that in a minute.

henryiii commented 6 years ago

Before the patch, the "split" version takes about the same amount of time as the normal version. This is on a dual-core processor with hyperthreading. With 2 splits it takes about 4.8 ms.

```python
import fast_histogram
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from functools import partial

ranges = ((-1, 1), (-1, 1))
vals = np.random.normal(size=[2, 1000000]).astype(np.float32)
```

```python
%%timeit
results = fast_histogram.histogram2d(*vals, bins=100, range=((-1, 1), (-1, 1)))
```

```
8.57 ms ± 436 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
```

```python
%%timeit
splits = 4
with ThreadPoolExecutor(max_workers=splits) as pool:
    chunk = vals.shape[1] // splits
    chunk0 = [vals[0, i*chunk:(i+1)*chunk] for i in range(splits)]
    chunk1 = [vals[1, i*chunk:(i+1)*chunk] for i in range(splits)]
    f = partial(fast_histogram.histogram2d, bins=100, range=((-1, 1), (-1, 1)))
    results = pool.map(f, chunk0, chunk1)
    results = sum(results)
```

```
4.14 ms ± 65.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
```
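As a sanity check that the chunk-and-sum approach is correct: histogramming each chunk separately and summing the per-chunk counts should reproduce the single-pass result exactly. A minimal sketch, using `numpy.histogram2d` instead of `fast_histogram` so it runs without the package installed (the chunking mirrors the snippet above):

```python
import numpy as np

rng = np.random.default_rng(0)
vals = rng.normal(size=(2, 100000)).astype(np.float32)
bins, ranges = 100, ((-1, 1), (-1, 1))

# Single-pass histogram over the whole dataset.
full, _, _ = np.histogram2d(vals[0], vals[1], bins=bins, range=ranges)

# Split the data into equal chunks, histogram each, and sum the counts.
splits = 4
chunk = vals.shape[1] // splits
partial_sum = sum(
    np.histogram2d(vals[0, i*chunk:(i+1)*chunk],
                   vals[1, i*chunk:(i+1)*chunk],
                   bins=bins, range=ranges)[0]
    for i in range(splits)
)

# Bin counts are additive, so the two results match exactly.
assert np.array_equal(full, partial_sum)
```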

I believe that on a 24-core machine with 10M elements, this can be about 8x faster.

astrofrog commented 6 years ago

Thanks for the explanation! This makes sense. So just to check, does this mean one could potentially run into issues if modifying the input arrays while the calculation is running? I think this is probably an acceptable risk, but just wanted to check.

henryiii commented 6 years ago

Yes, you might grab the old or the new version of the array values. But since the calculation only reads them, it can't cause a problem itself. Reading is not an issue in multithreading, only writing is. The calculation still owns a reference to the memory, so the array can't be deleted out from under it or anything like that.
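A small illustration of why concurrent reads are harmless: several threads can histogram the same shared array at once, and each one gets an identical, correct answer with no locking. This is a sketch using `numpy.histogram`; only writers to shared data would need synchronization:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(1)
shared = rng.normal(size=1_000_000)

def read_only_hist(_):
    # Each worker only reads `shared`, so no lock is needed.
    counts, _ = np.histogram(shared, bins=50, range=(-3, 3))
    return counts

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(read_only_hist, range(4)))

# Every thread saw the same, unmodified data.
assert all(np.array_equal(results[0], r) for r in results[1:])
```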