Closed ianthomas23 closed 1 year ago
Merging #1217 (c6ad0e4) into main (8092f4d) will increase coverage by 0.09%. The diff coverage is 84.72%.
```diff
@@            Coverage Diff             @@
##             main    #1217      +/-   ##
==========================================
+ Coverage   84.52%   84.62%   +0.09%
==========================================
  Files          35       35
  Lines        8369     8591     +222
==========================================
+ Hits         7074     7270     +196
- Misses       1295     1321      +26
```
Impacted Files | Coverage Δ | |
---|---|---
datashader/data_libraries/pandas.py | 100.00% <ø> (ø) | |
datashader/reductions.py | 83.58% <81.49%> (+0.47%) | :arrow_up: |
datashader/compiler.py | 87.91% <83.33%> (-2.66%) | :arrow_down: |
datashader/data_libraries/dask.py | 95.23% <100.00%> (+0.07%) | :arrow_up: |
datashader/data_libraries/dask_xarray.py | 98.95% <100.00%> (ø) | |
datashader/utils.py | 81.86% <100.00%> (+2.61%) | :arrow_up: |
This PR implements the use of CUDA mutexes in a `where` reduction by wrapping the use of the `where` and its `selector` in the same mutex lock region. Previously the two were separate, which could lead to some hard-to-track-down variations in results, as multiple CUDA threads could potentially try to alter the same data at the same time. The wrapping occurs in the `compiler.py` `make_append` function, which is where the existing special handling of `where` reductions takes place.

Whilst doing this I have chosen to move all use of a CUDA mutex from
`reductions.py` to `compiler.py` so that it is all in one place. This benefits us by reducing the future proliferation of `reduction._append_cuda` functions, which might otherwise have needed versions both with and without mutex locking.

I have also rewritten the handling of return values from
`reduction._append_cuda` functions to explicitly check whether the function call has changed the data stored in the `agg`, as this wasn't always correct before when handling `NaN`s.

This work is a necessary precursor to completing the CUDA support for
`where(max_n)` and similar (issues #1182 and #1207), which will follow shortly.
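The idea of the shared lock region and the explicit changed/unchanged bookkeeping can be sketched in plain Python. The real code is `numba.cuda` kernel source generated in `make_append`; the names `cuda_mutex_lock`, `cuda_mutex_unlock`, and `append_max_where` below are illustrative stand-ins, and the single-threaded lock stands in for an atomic compare-and-swap on the GPU:

```python
import numpy as np

def cuda_mutex_lock(mutex, index):
    # On the GPU this would spin on an atomic compare-and-swap until
    # the per-pixel mutex is acquired; single-threaded stand-in here.
    assert mutex[index] == 0, "mutex already held"
    mutex[index] = 1

def cuda_mutex_unlock(mutex, index):
    mutex[index] = 0

def append_max_where(x, y, sel_agg, where_agg, value, row_index, mutex):
    """Apply a max selector and its where append under one lock.

    Returns True if the aggregations were changed, False otherwise
    (a NaN value never changes them).
    """
    index = y * sel_agg.shape[1] + x
    cuda_mutex_lock(mutex, index)
    changed = False
    # Explicit NaN handling: a NaN value is ignored, and a cell that
    # is still NaN counts as unset, so any non-NaN value replaces it.
    if not np.isnan(value) and (
        np.isnan(sel_agg[y, x]) or value > sel_agg[y, x]
    ):
        sel_agg[y, x] = value        # selector updates its own agg...
        where_agg[y, x] = row_index  # ...and where records the winning row
        changed = True
    cuda_mutex_unlock(mutex, index)
    return changed

sel_agg = np.full((1, 2), np.nan)
where_agg = np.full((1, 2), -1)
mutex = np.zeros(2, dtype=np.int32)
assert append_max_where(0, 0, sel_agg, where_agg, 5.0, 7, mutex)         # first write
assert not append_max_where(0, 0, sel_agg, where_agg, 3.0, 8, mutex)     # smaller value, unchanged
assert not append_max_where(0, 0, sel_agg, where_agg, np.nan, 9, mutex)  # NaN, unchanged
assert where_agg[0, 0] == 7
```

Because both appends sit inside the same lock region, another thread cannot update the selector's agg between the comparison and the `where` write, which is the race described above.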