ianthomas23 closed this 1 year ago
Merging #1212 (a95bb1f) into main (31b3182) will decrease coverage by 0.13%. The diff coverage is 56.60%.
```diff
@@            Coverage Diff             @@
##             main    #1212      +/-   ##
==========================================
- Coverage   84.63%   84.50%   -0.13%
==========================================
  Files          35       35
  Lines        8354     8368      +14
==========================================
+ Hits         7070     7071       +1
- Misses       1284     1297      +13
```
| Impacted Files | Coverage Δ | |
|---|---|---|
| datashader/reductions.py | 83.11% <0.00%> (ø) | |
| datashader/compiler.py | 90.56% <16.66%> (-2.34%) | :arrow_down: |
| datashader/transfer_functions/_cuda_utils.py | 22.60% <26.31%> (-0.93%) | :arrow_down: |
| datashader/glyphs/area.py | 79.82% <83.33%> (ø) | |
| datashader/glyphs/line.py | 92.84% <100.00%> (ø) | |
| datashader/glyphs/points.py | 88.29% <100.00%> (ø) | |
| datashader/glyphs/polygon.py | 94.80% <100.00%> (ø) | |
| datashader/glyphs/quadmesh.py | 83.83% <100.00%> (ø) | |
| datashader/glyphs/trimesh.py | 92.36% <100.00%> (ø) | |
Excellent, thanks!
Fixes #1211.

For `numba >= 0.57` there is a CUDA mutex that is lockable on a per-pixel basis, whereas earlier `numba` doesn't have the required functionality, so there is a single lock shared between all pixels. Complicated CUDA reductions such as `max_n` need the mutex so that they work atomically; the new mutex allows the CUDA code to work in parallel very efficiently, whereas the old solution forces datashader to run mostly serially.

There is very little code changed here. The `numba.__version__` is used to select the appropriate mutex lock and unlock functions, and the size of the `cupy` array used for the mutex. Other than that, there is a new argument to the `info` function, as the canvas shape is needed. Although this argument is only needed for CUDA code running a complicated reduction such as `max_n`, I have chosen to pass it for every call rather than determine dynamically at the point of calling whether it is needed.

As an illustration of performance, demo code:
and time the `canvas.points` call, discarding the first timing as this includes `numba` compilation. This gives the following timings, using `numba 0.56.3` and `numba 0.57` for the slow and fast mutex respectively. The GPU on the test system is an Nvidia Quadro T1000.