holoviz / datashader

Quickly and accurately render even the largest data.
http://datashader.org
BSD 3-Clause "New" or "Revised" License

Ensure categorical column order is the same across dask partitions #1239

Closed ianthomas23 closed 1 year ago

ianthomas23 commented 1 year ago

Fixes #1202.

This fixes datashader's handling of categorical columns in dask partitions. It was broken by a change in dask's reading of categorical columns between releases 2022.7.0 and 2022.7.1 (dask/dask#9264), but the root cause is our slightly inconsistent handling of categorical columns, which we got away with historically but not after that change.
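As a minimal illustration of the inconsistency (plain pandas, not datashader's internals): if two partitions store the same labels but with different category orders, their integer codes disagree, so any per-partition aggregation that indexes a shared output axis by code attributes rows to the wrong category.

```python
import pandas as pd

# Partition 0 and partition 1 hold the same labels but with the
# categories listed in a different order, so the codes disagree:
p0 = pd.Categorical(["w", "b"], categories=["w", "b"])
p1 = pd.Categorical(["w", "b"], categories=["b", "w"])

print(list(p0.codes))  # [0, 1]
print(list(p1.codes))  # [1, 0]
# Indexing a shared output axis by code would then count partition
# 1's 'w' rows in the 'b' bucket, and vice versa.
```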

The relevant section of the dask docs is https://docs.dask.org/en/stable/dataframe-categoricals.html. When categorical columns are read from parquet files, the categories are not enforced to be the same across dask partitions, for performance reasons, so we need to enforce this ourselves. The dask docs provide a couple of solutions, both of which incur a performance cost as they involve traversing the whole dataframe.

Cutting a long story short, the fix is actually a two-liner in datashader. We already correct the categorical columns in dshape_from_dask() by calling categorize, passing it the list of categorical columns to correct. The corrected dask.DataFrame is used in datashader to work out the full list of categories for the categorical dimension of the returned xarray.DataArray, but the originally supplied (and therefore uncorrected) dask.DataFrame was still used in each individual partition's mapping of category to integer index. So the fix is to use the categorically-corrected DataFrame for the remainder of the datashader calculations. As we were already performing the categorize call, we incur no performance loss by doing this.

I have added a new explicit test.

Using the USA 2010 Census data and the following test code:

import dask.dataframe as dd
import datashader as ds

# Canvas covering the continental USA in Web Mercator coordinates.
cvs = ds.Canvas(plot_width=450, plot_height=265, x_range=[-14E6, -7.4E6], y_range=[2.7E6, 6.4E6])

for engine in ('fastparquet', 'pyarrow'):
    df = dd.read_parquet('~/data/census2010.parq', engine=engine)
    # Aggregate point counts per category of the 'race' column.
    agg = cvs.points(df, 'easting', 'northing', ds.count_cat('race'))
    color_key = {'w': 'aqua', 'b': 'lime', 'a': 'red', 'h': 'fuchsia', 'o': 'yellow'}
    img = ds.tf.shade(agg, how='eq_hist', color_key=color_key)
    ds.utils.export_image(img, f'issue1202_{engine}')

before the fix we see, for the fastparquet and pyarrow readers respectively: [images: before_fastparquet, before_pyarrow]

and after the fix: [images: after_fastparquet, after_pyarrow]

codecov[bot] commented 1 year ago

Codecov Report

Merging #1239 (fc467e6) into main (6dce648) will not change coverage. The diff coverage is 100.00%.

@@           Coverage Diff           @@
##             main    #1239   +/-   ##
=======================================
  Coverage   83.52%   83.52%           
=======================================
  Files          35       35           
  Lines        8778     8778           
=======================================
  Hits         7332     7332           
  Misses       1446     1446           
| Impacted Files | Coverage Δ |
|---|---|
| datashader/utils.py | 82.48% <ø> (ø) |
| datashader/core.py | 88.28% <100.00%> (ø) |
