NCAR / esmlab-regrid

ESMLab Regridding Utilities. ⚠️⚠️ ESMLab-regrid functionality has been moved into <https://github.com/NCAR/geocat-comp>. ⚠️⚠️
https://esmlab-regrid.readthedocs.io
Apache License 2.0

map file generation is slow and fails for big problems #5

Open matt-long opened 5 years ago

matt-long commented 5 years ago

I recently wanted to generate weights to map ETOPO1 (1-minute data) to 0.1° POP. The esmlab.regrid function failed.

I resorted to running ESMF_RegridWeightGen in MPI on 12 Cheyenne nodes.

#!/bin/bash
#PBS -N ESMF_RegridWeightGen
#PBS -q regular
#PBS -A NCGD0011
#PBS -l select=12:ncpus=36:mpiprocs=4:mem=109GB
#PBS -l walltime=06:00:00
#PBS -o logs/
#PBS -e logs/
#PBS -j oe

module purge
module load ncarenv/1.2
module load intel/17.0.1
module load netcdf/4.6.1
module load mpt/2.19

module load esmf_libs/7.1.0r
module load esmf-7.1.0r-ncdfio-mpi-O

SRC=/glade/work/mclong/esmlab-regrid/etopo1.nc
DST=/glade/work/mclong/esmlab-regrid/POP_tx0.1v3.nc

WEIGHT_FILE=/glade/work/mclong/esmlab-regrid/etopo1_to_POP_tx0.1v3_conservative.nc
METHOD=conserve

# Remove previous log files
rm -f PET*.RegridWeightGen.Log

mpirun -np 48 ESMF_RegridWeightGen --netcdf4 --ignore_unmapped -s ${SRC} -d ${DST} -m ${METHOD} -w ${WEIGHT_FILE}
andersy005 commented 5 years ago

The esmlab.regrid function failed.

I am curious to know what kind of error it was (MemoryError, etc.), or was it just too slow?

matt-long commented 5 years ago

Pretty sure it was a memory error, but I don't recall the specific message. I had to use several nodes to get over the memory hurdle with MPI.

andersy005 commented 5 years ago

Per xesmf documentation: https://xesmf.readthedocs.io/en/latest/limitations.html

xESMF currently only runs in serial. Parallel options are being investigated.

https://github.com/JiaweiZhuang/xESMF/issues/3

I just found out about it.

matt-long commented 5 years ago

We are currently using xESMF, but don't have to. ESMPy does support MPI: http://www.earthsystemmodeling.org/esmf_releases/last_built/esmpy_doc/html/examples.html?highlight=mpi

though it's not clear how to integrate with dask.

andersy005 commented 5 years ago

though it's not clear how to integrate with dask.

Introducing MPI and ESMPy's complicated interface :), and integrating these with Xarray and Dask, would definitely be a conundrum.

I am curious, what is the highest priority for esmlab-regrid? Is it usability? Performance? Do we want users to be able to perform regridding with one line of code? Because if usability is not the highest priority, it would be worth looking into MPI and ESMPy functionality

andersy005 commented 5 years ago

It looks like Dask's folks are looking into this kind of workflow: Running Dask and MPI programs together an experiment

andersy005 commented 5 years ago

@matt-long, correct me if I'm wrong: this kind of parallelism is only needed when generating the weights. Once you have the weights, you don't need the ESMPy/MPI machinery anymore. Applying the weights, which is just a sparse matrix multiplication, could be done without that heavy machinery using SciPy/Dask/Xarray, right?

matt-long commented 5 years ago

I think our focus should remain on an end-to-end workflow and usability in the near term, but keep performance through parallelism on the radar.

We could consider prototyping an MPI implementation as a standalone script, analogous to that shown here.

@andersy005, you are correct. The weights files are sparse matrices and are handled well by scipy.sparse.
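To illustrate the point about weight application: ESMF-style weight files store the regridding operator in sparse coordinate form (conventionally variables `row`, `col`, and `S` with 1-based indices), and applying it is a single sparse matrix-vector product. The sketch below fabricates a tiny two-cell-to-one-cell conservative map instead of reading a real weight file; the variable names follow the ESMF convention, but the values are made up for illustration.

```python
import numpy as np
import scipy.sparse as sp

# Hypothetical weight triplets as they would appear in an ESMF weight
# file: `row` = destination cell index, `col` = source cell index,
# `S` = weight. Indices are 1-based in the file format.
row = np.array([1, 1])
col = np.array([1, 2])
S = np.array([0.5, 0.5])

n_src, n_dst = 2, 1

# Build the sparse operator (convert to 0-based indices) and apply it
# to a flattened source field.
weights = sp.coo_matrix((S, (row - 1, col - 1)), shape=(n_dst, n_src)).tocsr()

src_field = np.array([10.0, 30.0])  # flattened source grid values
dst_field = weights @ src_field     # conservative average of the two cells
```

In a real workflow the triplets would come from the netCDF weight file generated by ESMF_RegridWeightGen, but the application step is exactly this product.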

andersy005 commented 5 years ago

@matt-long, was the work you were doing to generate WEIGHT_FILE=/glade/work/mclong/esmlab-regrid/etopo1_to_POP_tx0.1v3_conservative.nc connected to the content of this notebook https://gist.github.com/matt-long/87630e97dc787ffc27b33e944dcd1473 ?

matt-long commented 5 years ago

Yes

andersy005 commented 5 years ago

Since you are not using xesmf and ESMF/ESMPy, and the code deals with raw NumPy, I was thinking of exploring some optimization with numba and dask. Do you see any value in this or am I missing anything before I end up going down a rabbit hole :) ?

matt-long commented 5 years ago

By "connected" I mean that that code was used in the same project. It does not compute the weight files, but rather only the grid file. It's fast enough as is, I'd say. Not a high priority for optimization.

andersy005 commented 5 years ago

By "connected" I mean that that code was used in the same project. It does not compute the weight files, but rather only the grid file.

Good point. Does this mean that the failing component is the _gen_weights method?

https://github.com/NCAR/esmlab-regrid/blob/b8b71820e4807224cc52c16a68af4fbf405b4aa1/esmlab_regrid/core.py#L84-L88

matt-long commented 5 years ago

Yes.

andersy005 commented 5 years ago

Thank you for the clarification! Speaking of high priority, is there anything on your plate I can help with? :)

JiaweiZhuang commented 4 years ago

Not sure if related to JiaweiZhuang/xESMF#29. Parallel weight generation is very hard (if possible at all) to rewrite in a non-MPI way. But after the weights are generated, applying them to data using dask is much easier.
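The "apply weights with dask" half of that split can be sketched with dask.array alone: chunk the data only along a leading time dimension and multiply each block by the (already generated) sparse operator. Everything here is an assumption for illustration; the weight matrix is synthetic and the shapes are toy-sized.

```python
import numpy as np
import scipy.sparse as sp
import dask.array as da

# Assumed setup: a sparse operator mapping n_src flattened source cells
# to n_dst destination cells already exists (e.g. read from a weight
# file). Here it is a uniform 4-cell average, fabricated for the demo.
n_src, n_dst, n_time = 4, 2, 6
weights = sp.csr_matrix(np.full((n_dst, n_src), 0.25))

# Source data chunked only in time, so each block holds the full grid.
data = da.ones((n_time, n_src), chunks=(2, n_src))

def apply_weights(block):
    # (n_dst, n_src) @ (n_src, t) -> (n_dst, t), transposed back to (t, n_dst)
    return (weights @ block.T).T

# The spatial size changes from n_src to n_dst, so pass the new chunks.
regridded = data.map_blocks(apply_weights, chunks=(2, n_dst), dtype=float)
result = regridded.compute()
```

Because each time block is independent, this parallelizes trivially across dask workers without any MPI, which is what makes the generation/application split attractive.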

My plan is to clearly separate the "weight generation" and "weight application" phases.

Such separation will be much clearer after resolving JiaweiZhuang/xESMF#11. My plan is to have a "mini-xesmf" installation that doesn't depend on ESMPy -- it will just construct a complete regridder from existing weight files, generated from a ESMPy program running elsewhere (potentially a huge MPI run, potentially with a xesmf wrapper for better usability).