Add support for driving all catchments from a single gridded forcing file, regridded to per-catchment forcings, instead of one CSV forcing file per catchment.
Current behavior
Currently, formulations need a CSV forcing file for each catchment. We need to integrate the capability to drive all catchments using a single forcing file.
Expected behavior
Forcings are regridded to provide per-catchment forcings.
Proposed Solution
Solution 1
Each MPI rank reads part of a complete forcing file.
Each rank broadcasts its part of the forcing grid to all other ranks.
Individual ranks then load pre-stored regridding weights for each assigned catchment.
Forcings are then regridded and/or aggregated for each catchment as needed.
This approach requires a full forcing grid for the overall domain to be loaded in each MPI rank.
This approach minimizes forcing I/O time, as parallel NetCDF reads will never cover overlapping data segments (see the sketch after this list).
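A minimal Python sketch of the Solution 1 flow (read a non-overlapping slab, share slabs across ranks, apply pre-stored weights), using mpi4py, netCDF4, and numpy. The file name `forcing.nc`, the variable name `RAINRATE`, and the weights layout are illustrative assumptions, not ngen's actual interfaces.

```python
import numpy as np
from mpi4py import MPI
from netCDF4 import Dataset

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank reads a contiguous, non-overlapping band of grid rows for one
# time step, so no two ranks ever read the same data segment.
with Dataset("forcing.nc", "r") as nc:                   # hypothetical file layout
    var = nc.variables["RAINRATE"]                       # assumed shape: (time, y, x)
    ny = var.shape[1]
    rows = np.array_split(np.arange(ny), size)[rank]     # assumes size <= ny
    local = np.asarray(var[0, rows[0]:rows[-1] + 1, :])  # this rank's slab, time step 0

# Share every rank's slab so each rank holds the full forcing grid.
slabs = comm.allgather(local)
grid = np.concatenate(slabs, axis=0).ravel()

# Pre-stored regridding weights: for each catchment assigned to this rank,
# a weighted sum over the flat grid cells it overlaps (placeholder values).
weights = {"cat-87": (np.array([10, 11, 42]), np.array([0.5, 0.3, 0.2]))}
catchment_forcing = {cid: float(np.dot(grid[idx], w))
                     for cid, (idx, w) in weights.items()}
```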
Solution 2
Each MPI rank reads the bounding box of each assigned catchment from a forcing file.
Each rank then loads precomputed weights for regridding. These weights differ from those in Solution 1, as they apply to different matrix sizes.
Forcings are then regridded as needed.
This approach minimizes inter-rank communication, as forcing data are not shared between MPI ranks.
This approach requires more NetCDF reads: one per catchment instead of one per MPI rank.
NetCDF I/O time will be higher, as parallel reads will cover overlapping regions for at least some catchments (see the sketch after this list).
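A minimal Python sketch of the Solution 2 flow (one hyperslab read per catchment, weights expressed within that window), using netCDF4 and numpy. The bounding boxes, indices, and weights below are hypothetical placeholders for whatever precomputed metadata each rank would load.

```python
import numpy as np
from netCDF4 import Dataset

# Hypothetical per-catchment metadata assigned to this rank: a bounding
# box in grid indices plus regridding weights local to that window.
assigned = {
    "cat-87": {"bbox": (120, 135, 40, 60),       # y0, y1, x0, x1 (exclusive upper bounds)
               "idx": np.array([3, 20, 21]),     # flat indices inside the window
               "w": np.array([0.5, 0.3, 0.2])},  # matching regridding weights
}

catchment_forcing = {}
with Dataset("forcing.nc", "r") as nc:           # hypothetical file layout
    var = nc.variables["RAINRATE"]               # assumed shape: (time, y, x)
    for cid, meta in assigned.items():
        y0, y1, x0, x1 = meta["bbox"]
        # One hyperslab read per catchment; windows of nearby catchments
        # may overlap, which is the extra NetCDF I/O noted above.
        window = np.asarray(var[0, y0:y1, x0:x1]).ravel()
        catchment_forcing[cid] = float(np.dot(window[meta["idx"]], meta["w"]))
```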
Do we want to call this done after #406 (and #417)? Or should this be used as the Epic for generalized gridded forcing processing, including use of a spatial coupler, etc.?