dcherian opened 1 year ago
Agreed this would be great to document thoroughly. See also this relevant issue + discussion in rioxarray https://github.com/corteva/rioxarray/issues/253
Demonstrate the relationship between chunk size and computation time / number of tasks with a simple example?
- maybe even memory usage
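A minimal, dask-free sketch of that relationship: the number of tasks in a graph scales with the number of chunks, so shrinking the chunk edge length in two dimensions quickly multiplies the graph size while shrinking per-chunk memory. The array shape and chunk shapes below are made up purely for illustration:

```python
import math

def n_chunks(shape, chunks):
    """Number of chunks an array of `shape` is split into (one task or
    more per chunk in the resulting dask graph)."""
    return math.prod(math.ceil(s / c) for s, c in zip(shape, chunks))

def chunk_nbytes(chunks, itemsize=8):
    """Memory footprint of a single chunk (float64 by default)."""
    return math.prod(chunks) * itemsize

shape = (4096, 4096, 32)  # ~4 GiB of float64 values in total
for chunks in [(128, 128, 32), (1024, 1024, 32), (4096, 4096, 32)]:
    print(
        f"chunks={chunks}: {n_chunks(shape, chunks)} chunks, "
        f"{chunk_nbytes(chunks) / 2**20:.0f} MiB per chunk"
    )
```

Smaller chunks mean more tasks (scheduler overhead), bigger chunks mean fewer tasks but a larger peak-memory cost per worker; the tutorial example could plot these two curves against each other.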
This would be huge! This comes up often in Satpy, where users want to process satellite images on their local machine but only have 8 GB or 16 GB of memory. A good diagram showing chunks being processed by worker threads/processes, and how chunk size and the number of workers contribute to overall memory usage, would be a big help when explaining this to users.
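A hedged back-of-the-envelope version of that diagram, as arithmetic: peak memory scales roughly with chunk size × worker threads × chunks each thread holds in flight. This is a rule of thumb, not an exact dask accounting, and the numbers are illustrative:

```python
def estimated_peak_memory(chunk_nbytes, n_workers, threads_per_worker,
                          chunks_in_flight=2):
    """Rough rule of thumb: each worker thread may hold a few chunks
    (inputs plus intermediates) in memory at once."""
    return chunk_nbytes * n_workers * threads_per_worker * chunks_in_flight

# E.g. 256 MiB chunks on 4 workers x 2 threads, ~2 chunks per thread:
peak = estimated_peak_memory(256 * 2**20, n_workers=4, threads_per_worker=2)
print(f"~{peak / 2**30:.0f} GiB peak")  # already half of an 8 GB laptop
```

The same estimate shows the two knobs users can turn: shrink the chunks, or reduce the number of worker threads.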
forgot to cc @rybchuk!
From the Pangeo working meeting discussion with @mgrover1 @jmunroe @norlandrhagen.
Here's an outline for an intermediate tutorial on dask chunking, specifically for Xarray users:
- Motivation: why care about chunk size?
- Keeping track of chunks
- Why is it important to choose appropriate chunks early in the pipeline?
- Specify chunks when reading data
  - `chunks="auto"`
  - `.chunks`
  - during data read: `open_dataset`, `open_mfdataset`
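A self-contained sketch of that read-time API. The file names in the comments are hypothetical; the snippet chunks an in-memory Dataset so it runs without any files, but the `chunks=` argument works the same way when passed to `open_dataset` / `open_mfdataset`:

```python
import numpy as np
import xarray as xr

# In a real pipeline you would pass `chunks` at read time, e.g.
#   ds = xr.open_dataset("file.nc", chunks={"time": 5})       # explicit
#   ds = xr.open_mfdataset("files_*.nc", chunks="auto")       # let dask pick
# Here we chunk an in-memory Dataset so the example is self-contained.
ds = xr.Dataset({"t": (("time", "y", "x"), np.zeros((10, 100, 100)))})
ds = ds.chunk({"time": 5, "y": 50, "x": 50})

# .chunks reports the per-dimension chunk layout of the dask-backed data
print(ds["t"].chunks)  # ((5, 5), (50, 50), (50, 50))
```

Specifying chunks at read time (rather than calling `.chunk()` on already-loaded data) is what keeps the data out of memory in the first place, which is the point of the "choose chunks early" section above.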