Currently the chunk size is fixed at 10, which gave near-optimal results on one test file (tas_..._20051130.nc) with data dimensions (52560, 145, 192). This equates to approximately 1MB per chunk.
Determine the optimal value for data files with differing resolutions and numbers of dimensions, and attempt to approximate it with an automatic calculation that uses the shape, dtype, etc. to select a near-optimal value.
NB. The optimal value will probably vary depending on the architecture (e.g. desktop vs. HPC). How much does this matter? Is there a convenient 80/20 trade-off? If not, and we really do need architecture-dependent tuning, how simple can that be? A single number?
The evaluation engine has switched to a fixed 8MB buffer size instead of a fixed dimension length. This is good enough for now; we can re-open this if something more specific is needed.
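As a rough illustration of what such an automatic calculation could look like, here is a minimal sketch (not the project's actual implementation; the function name `leading_chunk_length` and the byte budget default are assumptions) that picks a chunk length along the leading dimension so each chunk fills roughly a fixed byte budget, e.g. the 8MB buffer mentioned above:

```python
import numpy as np

def leading_chunk_length(shape, dtype, byte_budget=8 * 1024 * 1024):
    """Hypothetical helper: chunk length for axis 0 targeting ~byte_budget bytes."""
    itemsize = np.dtype(dtype).itemsize
    # Bytes consumed by a single slice along the leading dimension.
    bytes_per_slice = itemsize * int(np.prod(shape[1:], dtype=np.int64))
    # At least one slice per chunk; never exceed the dimension length itself.
    length = max(1, byte_budget // bytes_per_slice)
    return int(min(length, shape[0]))

if __name__ == "__main__":
    # The test file above: (52560, 145, 192), assumed float32.
    # A 10-slice chunk is ~1MB; an 8MB budget gives 75 slices.
    print(leading_chunk_length((52560, 145, 192), np.float32))
```

With the test file's shape and a float32 dtype, the fixed chunk size of 10 corresponds to about 1.1MB, which matches the ~1MB figure quoted in the original description.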