Closed: ianthomas23 closed this pull request 1 year ago.
Merging #1239 (fc467e6) into main (6dce648) will not change coverage. The diff coverage is 100.00%.

```
@@           Coverage Diff           @@
##             main    #1239   +/- ##
=====================================
  Coverage   83.52%   83.52%
=====================================
  Files          35       35
  Lines        8778     8778
=====================================
  Hits         7332     7332
  Misses       1446     1446
```

| Impacted Files | Coverage Δ | |
|---|---|---|
| datashader/utils.py | 82.48% <ø> (ø) | |
| datashader/core.py | 88.28% <100.00%> (ø) | |
Fixes #1202.
This fixes datashader's handling of categorical columns in dask partitions. It was broken by a change in dask's reading of categorical columns between releases 2022.7.0 and 2022.7.1 (dask/dask#9264), but the root cause is our slightly inconsistent handling of categorical columns, which we got away with historically but not after that change.

The relevant section of the dask docs is https://docs.dask.org/en/stable/dataframe-categoricals.html. When categorical columns are read from parquet files, for performance reasons dask does not enforce that the categories are the same across partitions, so we need to do this ourselves. The docs provide a couple of solutions, both of which incur a performance cost because they involve traversing the whole dataframe.
Cutting a long story short, the fix is actually a two-liner in datashader. We already correct the categorical columns in `dshape_from_dask()` by calling `categorize`, passing a list of the categorical columns to correct. The corrected `dask.DataFrame` is used in datashader to work out the full list of categories for the categorical dimension of the returned `xarray.DataArray`. But the originally supplied (and therefore uncorrected) `dask.DataFrame` was still used in each individual partition's mapping of category to integer index. So the fix is to use the categorically corrected `DataFrame` for the remainder of the datashader calculations. As we were already performing the `categorize` call, we incur no performance loss by doing this.

I have added a new explicit test.
Using the USA 2010 Census data and the following test code:

before the fix we see, for the `fastparquet` and `pyarrow` readers respectively:

![before_pyarrow](https://github.com/holoviz/datashader/assets/580326/b4d05eb4-f7e4-42d2-aa62-6dfc42841dea)

and after the fix:

![after_pyarrow](https://github.com/holoviz/datashader/assets/580326/fae98329-4a93-48c5-8007-99bf504782d5)