pysal / tobler

Spatial interpolation, Dasymetric Mapping, & Change of Support
https://pysal.org/tobler
BSD 3-Clause "New" or "Revised" License

WIP Native Dask implementation for area interpolation (Do not merge) #187

Open · darribas opened 10 months ago

darribas commented 10 months ago

This is my first attempt at implementing area interpolation fully in Dask (as opposed to running the single-core logic inside the Dask scheduler). The main motivation is to obtain correct estimates for intensive/extensive variables, which are currently erroneous under the chunk-by-chunk code that does work for categorical variables (as discussed in #185).

A couple of thoughts on what I learned:

https://github.com/pysal/tobler/blob/ce6fcb900b4290cd7bbec99236dd40a1baa98f0b/tobler/area_weighted/area_interpolate.py#L316-L321

This means we need to build the full table of weights (the intersection areas between each pair of source and target geometries) before we split things across workers for independent work. Now, the weights themselves can be built in parallel: this is essentially about filling in different parts of a matrix that are not all related to each other. This is what I attempt in this PR with id_area_table (copied from the single-core implementation), which works in chunks and is added to the existing Dask computation graph. It returns a three-column Dask dataframe with source ID, target ID and shared area, where only non-zero values are stored (i.e., you never have a row in this table with a value of 0 in the third column). As @knaaptime suggests, this is not a million miles away from the new graph implementation. id_area_table, which builds the cross-over table, can be run in parallel with minimal inter-worker communication; it is performant and returns what is expected.
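As a rough illustration of the per-chunk work described above (this is a sketch, not the PR's actual id_area_table; the function name, the id_src/id_tgt column names and the use of geopandas.overlay are assumptions for the example), each chunk of sources can be intersected with the targets and reduced to a plain three-column table:

```python
import geopandas
import pandas

def crossover_chunk(source_chunk, targets, source_id="id_src", target_id="id_tgt"):
    """Sketch of one chunk of the cross-over table: intersection areas
    between a chunk of source geometries and the target geometries."""
    # Overlay returns one row per non-empty source/target intersection,
    # so zero-area pairs are simply never stored.
    inter = geopandas.overlay(
        source_chunk[[source_id, "geometry"]],
        targets[[target_id, "geometry"]],
        how="intersection",
        keep_geom_type=True,
    )
    inter["shared_area"] = inter.geometry.area
    # Drop the geometry: the cross-over table is purely tabular.
    return pandas.DataFrame(inter[[source_id, target_id, "shared_area"]])
```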

The main problem comes next when we want to use that table to allocate source values into the target geography. Given we have the cross-over table (AoI in Eli's terms) as an adjacency table, this is really a set of SQL-type joins and group-by operations. I attempt that here:

https://github.com/darribas/tobler/blob/5ce79e71a89fc4ede93ec22f6eb84b1acf884e4a/tobler/area_weighted/area_interpolate_dask.py#L346-L360

The logic of the code is as follows (a small in-memory sketch of these steps is included after the list):

  1. Row standardisation: grab the area of intersections and divide each by the total area of its source geom. The area of each source geom can be obtained by grouping by source ID and applying a transform that sums it --> weights
  2. Take the resulting weights for each intersection area and join the values to be interpolated onto them through the source IDs.
  3. For extensive variables, multiply the source values by the weights (this is multiplying two values in the same row of our table)
  4. Compile output geoms: group-by the table by target IDs and sum inside each group.
  5. Congratulations, you're done!
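A minimal in-memory sketch of steps 1.-4. (made-up IDs and values, pandas only; the real code operates on Dask dataframes):

```python
import pandas

# Hypothetical cross-over table: one row per non-empty source/target
# intersection (source ID, target ID, shared area).
xover = pandas.DataFrame({
    "id_src": ["a", "a", "b", "b"],
    "id_tgt": ["x", "y", "y", "z"],
    "area":   [2.0, 2.0, 1.0, 3.0],
})
src_values = pandas.DataFrame({"id_src": ["a", "b"], "pop": [100.0, 40.0]})

# 1. Row standardisation: each intersection area over the total area of its source geom.
xover["w"] = xover["area"] / xover.groupby("id_src")["area"].transform("sum")

# 2. Join the values to be interpolated onto the weights through the source IDs.
tmp = xover.merge(src_values, on="id_src")

# 3. Extensive variable: multiply source value by weight within each row.
tmp["pop_alloc"] = tmp["pop"] * tmp["w"]

# 4. Compile output: group by target ID and sum inside each group.
estimates = tmp.groupby("id_tgt")["pop_alloc"].sum()
print(estimates)  # x: 50.0, y: 60.0, z: 30.0
```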

Now, the issue with the logic above is that, although 1., 3., and potentially 4. are fast and very performant on Dask, 2. involves a significant amount of inter-worker communication and, since it is not done on sorted indices (that I can see), it will be slow.
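To make the step-2 concern concrete, here is a toy Dask version of that join (hypothetical data; the point is only that merging on a plain column forces a shuffle, whereas a join on sorted indices with known divisions can proceed partition by partition):

```python
import pandas
import dask.dataframe as dd

weights = dd.from_pandas(
    pandas.DataFrame({"id_src": ["a", "a", "b", "b"],
                      "id_tgt": ["x", "y", "y", "z"],
                      "w": [0.5, 0.5, 0.25, 0.75]}),
    npartitions=2,
)
values = dd.from_pandas(
    pandas.DataFrame({"id_src": ["a", "b"], "pop": [100.0, 40.0]}),
    npartitions=2,
)

# Merging on a plain column: Dask has to shuffle both sides so that matching
# id_src values land in the same partition (lots of inter-worker traffic).
shuffled = weights.merge(values, on="id_src")

# With id_src as a sorted index and known divisions on both sides, the
# merge can instead be done partition by partition.
aligned = weights.set_index("id_src").merge(
    values.set_index("id_src"), left_index=True, right_index=True
)
print(aligned.compute())
```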

This is where I stopped. A couple of further thoughts:

  - The bottleneck above in 2. does not apply if all the operation is performed in memory (e.g., as a single pandas.DataFrame or polars.DataFrame), so if we can live with in-memory computation only, I think this logic would speed up our current implementation.

Very happy to explore further options and discuss alternative views!

knaaptime commented 10 months ago

so what about just returning area in line 348 (which gives us the row-transformed values of table), without following on to build tmp? Converting this to a dataframe back in single-core (no need to worry about geoms anymore) gives us a multi-index series that we can convert directly to sparse (like Graph). This would also amount to a direct replacement for _area_tables_binning, since it would result in a scipy.sparse matrix, and multiplying through that is really cheap, so we don't need to worry about distributing it?
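A small sketch of that idea (hypothetical IDs and values; pandas.factorize plus a COO matrix is just one way to go from a multi-index series to scipy.sparse, not necessarily what the Graph code does internally):

```python
import numpy
import pandas
from scipy import sparse

# Multi-indexed series of row-standardised weights, as it would come back
# from the distributed step (no geometries attached).
w = pandas.Series(
    [0.5, 0.5, 0.25, 0.75],
    index=pandas.MultiIndex.from_tuples(
        [("a", "x"), ("a", "y"), ("b", "y"), ("b", "z")],
        names=["id_src", "id_tgt"],
    ),
)

# Encode the IDs as integer positions and build a sources-by-targets sparse matrix.
src_codes, src_ids = pandas.factorize(w.index.get_level_values("id_src"))
tgt_codes, tgt_ids = pandas.factorize(w.index.get_level_values("id_tgt"))
W = sparse.coo_matrix(
    (w.to_numpy(), (src_codes, tgt_codes)),
    shape=(len(src_ids), len(tgt_ids)),
).tocsr()

# Multiplying through the sparse matrix is cheap: allocating an extensive
# variable from sources to targets is a single sparse mat-vec.
pop = numpy.array([100.0, 40.0])       # aligned with src_ids
estimates = W.T @ pop                  # aligned with tgt_ids
print(dict(zip(tgt_ids, estimates)))   # {'x': 50.0, 'y': 60.0, 'z': 30.0}
```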

so i say we go ahead and stop here and move back to memory

darribas commented 9 months ago

That sounds good. I'd just be very clear in the documentation that these approaches will not work with larger-than-memory data (which the released version for categoricals does handle). In reality, I expect most use cases to be fine with that, and it is also true that what we bring up in memory does not have geometries, so it is significantly cheaper in terms of memory. But we should make that clear in the documentation?