czender closed this issue 9 years ago
Hi Charlie,
One modification to your call to GenerateCSMesh should be to add --alt in both cases. ACME uses a different ordering of nodes around each face than was originally implemented, and this may be remedied in a future release. Does this command sequence fail?
What do you mean by ne30np4 (pentagons)? All of the cubed-sphere meshes should be quadrilaterals.
Thanks for the NCO integration! I am very interested to see how it performs.
~ Paul
On Sun, Jun 14, 2015 at 10:02 AM, Charlie Zender notifications@github.com wrote:
Hola Paul et al,
NCO version 4.5.0 (latest stable release) works well with all SCRIP and ESMF-generated map-files that I have tested. Version 4.5.1-alpha1 (just now released) can apply (some) Tempest-generated map-files. Regridding rectangular lat-lon (RLL) meshes RLL<->RLL seems to work. I don't understand the parameters to generate correct cubed-sphere (CS) meshes, so I can't yet test CS<->CS and CS<->RLL.
I have ne120np4 and ne30np4 (pentagons) data from ACME, and am trying to regrid ne120np4 to ne30np4. Can Tempest do this? If so, please help me refine this Tempest procedure, which completes but issues many non-conservative warnings, and then has unexpected dimension sizes on both source and destination grids:
GenerateCSMesh --res 30 --file ${DATA}/rgr/msh_ne30.g
GenerateCSMesh --res 120 --file ${DATA}/rgr/msh_ne120.g
GenerateOverlapMesh --a ${DATA}/rgr/msh_ne120.g --b ${DATA}/rgr/msh_ne30.g --out ${DATA}/rgr/msh_ovr_ne120_to_ne30.g
GenerateOfflineMap --in_mesh ${DATA}/rgr/msh_ne120.g --out_mesh ${DATA}/rgr/msh_ne30.g --ov_mesh ${DATA}/rgr/msh_ovr_ne120_to_ne30.g --in_type cgll --out_type cgll --in_np 4 --out_np 4 --out_map ${DATA}/maps/map_ne120np4_to_ne30np4.20150613.nc
I am out of my depth with CS FV/FE options. If Tempest cannot do this then please suggest a different unstructured grid test that I can perform with some CAM data.
This is also a request for any feedback on the NCO implementation, where ncks plays the role of Tempest ApplyOfflineMap. Source + docs at https://github.com/czender/nco/releases and http://nco.sf.net/nco.html#regrid. Especially interested in feature requests to better handle 0D, 1D, and grid-dependent variables, and in supporting awesome Tempest meshes.
Thanks! Charlie
With --alt the commands completed much as before, with warnings like these:
....OfflineMap is not conservative in column 777598 (2.338265645186064e-32 / 2.298959801752191e-05)
....OfflineMap is not conservative in column 777599 (-6.121658933823181e-32 / 2.290116807872842e-05)
....OfflineMap is not conservative in column 777600 (-6.121658933823080e-32 / 2.290116807872857e-05)
....OfflineMap is not conservative in column 777601 (1.602671115628291e-31 / 2.281239541626004e-05)
Writing output
..Writing offline map
------------------------------------------------------------
By pentagons I mean nv_a and nv_b = 5, and the coordinates look like pentagons. I have also seen ne30 and ne120 quadrilateral SCRIP-format gridfiles, yet I think the ACME simulations I have been looking at use the pentagons. Mark Taylor could explain which grids ACME uses and why. And the grid sizes are not what I expect:
netcdf map_ne120np4_to_ne30np4.20150613 {
dimensions:
dst_grid_rank = 1 ;
n_a = 777602 ;
n_b = 48602 ;
n_s = 14045402 ;
nv_a = 1 ;
nv_b = 1 ;
src_grid_rank = 1 ;
Most of this looks right but NCO expects nv_a = nv_b = 5 for ACME files. For quadrilaterals, nv_a = nv_b = 4, yes? So what does nv_a = nv_b = 1 as above signify?
Assuming my ACME data really are pentagons, I would be happy to test on CS quadrilaterals but I have no CAM data for quadrilaterals. So an alternative if Tempest won't do the pentagons is to send me a Tempest recipe for CS quadrilaterals and a CAM output file for both source and destination grid. I do want to test Tempest CS and any other popular Tempest grids before looking at the regionally refined maps.
c
I can answer the "pentagons" question: the pentagons are not needed/used by Tempest. But the ESMF conservative mapping algorithm requires the dual grid to the SE nodes. The normal dual grid to an SE nodal cubed-sphere grid would contain only triangles and quads. But if you construct such a grid, the spherical areas won't match the GLL weights. To get them to match, we had to make some of the quads pentagons so we could tweak their areas a little bit.
Hi Charlie,
The conservation warnings are not surprising; this typically occurs at very high resolutions, when the summation can't match machine epsilon exactly due to the large number of terms. There's probably some algorithm I can use to improve this performance. It's nonetheless worthwhile to try applying the map just to see if conservation is actually maintained.
Is it possible to make NCO support both quadrilateral and pentagonal ACME grids?
~ Paul
I will have a look into why nv_a = nv_b = 1, but might not get a chance this week. This might be a bug in the output. I'm not sure it's actually needed for performing the remapping, since all the remapping requires is that n_a match the length of ncol in the input array.
Regarding quadrilateral data, I haven’t had a problem with using standard CAM output.
~ Paul
OK, that makes sense. I'll ignore the conservation warnings. The expected (mean absolute) rounding error from N = 10^6 summations eats up three digits of precision, so set epsilon >= 1.0e-13; the error scales as sqrt(N).
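A quick back-of-the-envelope check of that sqrt(N) error-growth estimate (a sketch only; the exact error depends on the data and summation order):

```python
import math
import random

random.seed(0)
N = 10**6
xs = [random.random() for _ in range(N)]

naive = 0.0
for x in xs:
    naive += x                      # plain left-to-right accumulation

exact = math.fsum(xs)               # correctly rounded reference sum
rel_err = abs(naive - exact) / exact

eps = 2.0 ** -52                    # double-precision machine epsilon
bound = eps * math.sqrt(N)          # expected error growth ~ eps*sqrt(N), ~2e-13 here
print(rel_err, bound)
```

The observed relative error of the naive sum typically lands within an order of magnitude of eps*sqrt(N), consistent with treating epsilon >= 1.0e-13 as the noise floor at this N.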
Are you saying that the quadrilateral map that Tempest generates, and that I now have, should work as is with the ACME ne30np4 and ne120np4 output? If so, I'll go back and try that again.
If not, NCO already supports pentagonal (in fact) and quadrilateral (in theory) grids, and any-shape gridcells on unstructured grids. I just don't have any CAM data to test nv_a = 4 unstructured grids. If you supply CAM data and a Tempest recipe for the meshes, then I can test it. NCO annotates the regridded dataset with metadata; in particular, NCO uses nv_b to generate the cell boundary arrays from yv_b, and it gets confused when nv = 1. I can turn off the cell boundary arrays for Tempest map-file regridding once I have a map-file and data for validation. So if you supply those I'll be happy for a while.
There's an issue with cell boundary arrays when the target grid is a spectral element (SE, or finite element) grid as opposed to an FV grid. In an FV grid, each value produced is associated with a well-defined cell, and the cell bounds are well defined. In an SE grid, each spectral element contains (for example) a 4x4 grid of GLL nodes, and the values produced by the map are given at those nodes. But all 16 values belong to the same element; we don't have little subelements that contain each GLL node. Further complicating things, the nodes on an edge or corner are shared by multiple elements. For each GLL node we have a weight/area, but a Tempest remap file will not contain cells around each GLL node.
ESMF mapping files are quite different. Since they only work for FV-to-FV grids, we go through a complex process to construct the GLL dual grid, which has a cell containing each GLL node. It's difficult to construct these grids, requiring a lot of iteration to get the size of the cells to match the GLL weight/area variable.
Anyway, the point of all this is that the ESMF mapping files will contain (somewhat artificial) cell boundary information, but a Tempest remap file can't/won't.
Is it OK to ignore the cell edge information in the Tempest-generated CS map-files, and just use the weights to regrid the ACME simulations? I would do this with the mapfile generated by the ne30/ne120 np4 Tempest recipe discussed above, and I would apply it to the ACME ne30 and ne120 data. Would that constitute a valid demonstration of support for Tempest on unstructured grids? Sorry if this is under-informed, but fundamentally I'm still unsure whether Tempest gave me a map-file that's consistent (or consistent enough) to weight the data (forget about the interfaces for now) that I intend to apply it to. A "yes" or "no" would help here :)
Hi Charlie,
I'm not 100% sure about your question, but maybe my work with Tempest can help you.
I used Tempest to make mapping files from an unstructured CS grid (called CONUS) to a 1 degree RLL using the following commands:
./GenerateOverlapMesh --b 181x360_SCRIP.nc --a conusx4v1.g --out conus_on_181x360_mesh.g --method mixed
./GenerateOfflineMap --in_mesh conusx4v1.g --out_mesh 181x360_SCRIP.nc --ov_mesh conus_on_181x360_mesh.g --out_double --in_type cgll --out_type fv --in_meta conusx4v1np4_latlon.nc --out_map tempestmap_conus_on_181x360_meta.nc
The 1 degree RLL I used was generated with NCL (for the purpose of direct comparison between ESMF maps and Tempest maps, I wanted to use the same RLL grid). There are several options for "method": the default is "fuzzy", but the user can also specify "exact" and "mixed". I found this helped eliminate the "not conservative" errors.
The --in_meta feature refers to what Mark T. was saying about the GLL dual grid - since I wanted a map to be compatible with ESMF. The "conusx4v1np4_latlon.nc" is the dual grid file. This is unnecessary for you.
I was able to use the resulting map file in application to data with both Tempest and ESMF. So I am inclined to answer your question with "yes", though there may be need of some tweaking.
EDIT: My local version of TempestRemap reorders indices for the --in_meta feature. So this feature won't work the same way with the main version.
After reading through this thread, I now think the simple answer is "no".
In your case, you have a mapping file for a Tempest cubed-sphere grid, and want to apply it to output generated by CAM-SE, which uses an internally generated cubed-sphere grid. So unless Tempest went through some effort to make sure the elements are numbered in the same order, and the GLL nodes within each element also use the same order/orientation (Paul has to answer that), I think the answer to your question is no: the Tempest mapping file can't be applied to CAM-SE output. It would run, but the output would be scrambled.
To use the Tempest remap algorithm on CAM-SE output, I think we need to follow Miranda's procedure, and we would need the metadata file that maps GLL nodes to elements ("conusx4v1.g" in the above example). But we only have these for unstructured grids generated by SQuadGen or CUBIT, not for CAM-SE's built-in cubed-sphere grids.
This appears to be a gap in our plan to migrate from bilinear and area averaged maps to Tempest maps.
Thanks, Miranda and Mark. Sounds like there is work to be done to "fill the gap". In the meantime, if it would be useful, I could still evaluate NCO's regridder on output consistent with Tempest's current CS maps. To do so, I would need a complete Tempest recipe to generate the mapfile, and a CS dataset (from any model, CAM or not) consistent with the source grid in the mapfile. A CS->RLL mapfile like Miranda's would be ideal. Otherwise please let me know when I can use Tempest to generate CS mapfiles that are consistent with the ACME CAM-SE data I already have.
Sorry for the slow reply. I'm on vacation in Canada and only occasionally checking email. I have designed Tempest to use the same cubed-sphere ordering as CAM-SE when the cubed-sphere grid is constructed with the --alt option. The map should be directly applicable to CAM-SE output with the ApplyOfflineMap tool. Travis O'Brien also uses the maps directly with one of his parallel Python codes.
~ Paul
I should have known Paul would have already done this!
That's good to hear. Paul, stay on vacation and off the grid, so to speak. I'll pick up this thread next week.
Hi All,
I used Tempest to create (script below) ne120np4<->ne30np4 map-files for the purpose of applying them (with ncks) to CAM-SE output (from ACME). For comparison, I did the same with ESMF_RegridWeightGen. The Tempest mapfiles have the same n_a, n_b as ESMF, yet the Tempest n_s is approximately 10x greater than ESMF. I don't know what the extra information is, or how to apply it.
Tempest:
netcdf map_ne30np4_to_ne120np4_tps.20150618 {
dimensions:
dst_grid_rank = 1 ;
n_a = 48602 ;
n_b = 777602 ;
n_s = 14045402 ;
nv_a = 1 ;
nv_b = 1 ;
src_grid_rank = 1 ;
ESMF:
netcdf map_ne30np4_to_ne120np4_aave.20150603 {
dimensions:
dst_grid_rank = 1 ;
n_a = 48602 ;
n_b = 777602 ;
n_s = 1217786 ;
nv_a = 5 ;
nv_b = 5 ;
src_grid_rank = 1 ;
I must not understand some key dimension in the Tempest files. Is there a subloop over the GLL nodes? Something like that must be what allows Tempest to do better than first-order area accuracy, etc. The sad, sad truth is that I don't know how that's encoded/decoded (SCRIP has a num_wgts loop, for example). Can someone please steer me towards landfall? Some kind of equation in metacode for a destination gridpoint value would be peachy.
Thanks! c
GenerateCSMesh --alt --res 30 --file ${DATA}/rgr/msh_ne30.g
GenerateCSMesh --alt --res 120 --file ${DATA}/rgr/msh_ne120.g
GenerateOverlapMesh --a ${DATA}/rgr/msh_ne120.g --b ${DATA}/rgr/msh_ne30.g --out ${DATA}/rgr/msh_ovr_ne120_to_ne30.g
GenerateOfflineMap --in_mesh ${DATA}/rgr/msh_ne120.g --out_mesh ${DATA}/rgr/msh_ne30.g --ov_mesh ${DATA}/rgr/msh_ovr_ne120_to_ne30.g --in_type cgll --out_type cgll --in_np 4 --out_np 4 --out_map ${DATA}/maps/map_ne120np4_to_ne30np4_tps.20150613.nc
GenerateOverlapMesh --a ${DATA}/rgr/msh_ne30.g --b ${DATA}/rgr/msh_ne120.g --out ${DATA}/rgr/msh_ovr_ne30_to_ne120.g
GenerateOfflineMap --in_mesh ${DATA}/rgr/msh_ne30.g --out_mesh ${DATA}/rgr/msh_ne120.g --ov_mesh ${DATA}/rgr/msh_ovr_ne30_to_ne120.g --in_type cgll --out_type cgll --in_np 4 --out_np 4 --out_map ${DATA}/maps/map_ne30np4_to_ne120np4_tps.20150618.nc
The SCRIP format is just a sparse matrix representation, and n_s is the number of non-zero entries in the matrix.
So internally, for both mapping files, you must have a loop of the form:

do k = 1, n_s   (SCRIP indices start at 1, not 0)
  i = row(k)
  j = col(k)
  S(k) = value of the mapping matrix at (i,j)
enddo

i will range over 1..n_b (destination) and j over 1..n_a (source).
Hence if your code works with ESMF mapping files, it should work with Tempest. The only problem we had was that some of the Tempest metadata was not quite identical to the SCRIP format, but I think that's all been fixed.
Here's my NCL code to apply a mapping file:

data_b = 0
do i = 0, n_s-1
  ic = col(i) - 1
  ir = row(i) - 1
  data_b(ir) = data_b(ir) + S(i)*data_a(ic)
end do
Tempest just uses more data for the reconstruction.
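For reference, the same apply step can be vectorized in Python. This is a sketch with made-up toy arrays; in practice row, col, and S would be read from the mapfile:

```python
import numpy as np

# Toy stand-ins for a mapfile's sparse-matrix arrays (1-based, per SCRIP)
n_a, n_b = 4, 2                      # source and destination grid sizes
row = np.array([1, 1, 2, 2])         # destination index, 1..n_b
col = np.array([1, 2, 3, 4])         # source index, 1..n_a
S   = np.array([0.5, 0.5, 0.5, 0.5]) # mapping-matrix entries (weights)

data_a = np.array([1.0, 3.0, 5.0, 7.0])

# Equivalent of the NCL loop: scatter-add weighted source values.
# np.add.at is unbuffered, so repeated destination indices accumulate.
data_b = np.zeros(n_b)
np.add.at(data_b, row - 1, S * data_a[col - 1])
print(data_b)                        # → [2. 6.]
```

The unbuffered scatter-add is the important detail: a plain `data_b[row - 1] += ...` would silently drop all but one contribution per repeated destination index.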
Hi All,
Thanks for responding, Mark. NCO uses the same remapping algorithm, and it regrids ESMF, SCRIP, and Tempest RLL files fine. There is a bug either in the Tempest 1D mapfiles or in the NCO treatment of Tempest 1D mapfiles in particular (NCO does ESMF 1D->ND fine). I think I have tracked down the issue to its root cause.
The dimension size n_b could be used to determine 1D grid sizes, but n_b alone is insufficient for 2D grids, so NCO always uses dst_grid_dims to determine output dimension sizes. Tempest produces different values of dst_grid_dims and src_grid_dims than ESMF. When NCO uses Tempest values, the gods become angry. Values of dst/src_grid_dims from ESMF and Tempest mapfiles are below.
Is there a good explanation for why ESMF and Tempest differ? I inferred that ApplyOfflineMap utilizes n_b (not dst_grid_dims). Sure enough, ApplyOfflineMap fails when asked to apply ESMF 1D->1D mapfiles. Based on SCRIP definitions of dst/src_grid_dims, this seems like a Tempest bug.
Thanks for continued feedback and fixes! c
zender@roulee:/data/zender/maps$ ncks -H -v dst_grid_dims,src_grid_dims --cdl map_ne120np4_to_ne30np4_aave.20150614.nc
netcdf map_ne120np4_to_ne30np4_aave.20150614 {
dimensions:
  dst_grid_rank = 1 ;
  src_grid_rank = 1 ;
variables:
  int dst_grid_dims(dst_grid_rank) ;
  int src_grid_dims(src_grid_rank) ;
data:
  dst_grid_dims = 48602 ;
  src_grid_dims = 777602 ;
} // group /

zender@roulee:/data/zender/maps$ ncks -H -v dst_grid_dims,src_grid_dims --cdl map_ne120np4_to_ne30np4_tps.20150613.nc
netcdf map_ne120np4_to_ne30np4_tps.20150613 {
dimensions:
  dst_grid_rank = 1 ;
  src_grid_rank = 1 ;
variables:
  int dst_grid_dims(dst_grid_rank) ;
    dst_grid_dims:name0 = "num_elem" ;
  int src_grid_dims(src_grid_rank) ;
    src_grid_dims:name0 = "num_elem" ;
data:
  dst_grid_dims = 5400 ;
  src_grid_dims = 86400 ;
} // group /
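The inconsistency can be stated as a simple invariant check. A sketch using the dimension values quoted above (`check_scrip_dims` is a hypothetical helper, not an NCO or Tempest function):

```python
import numpy as np

def check_scrip_dims(n_a, n_b, src_grid_dims, dst_grid_dims):
    """Per SCRIP conventions, prod(src_grid_dims) should equal n_a
    and prod(dst_grid_dims) should equal n_b."""
    return (int(np.prod(src_grid_dims)) == n_a,
            int(np.prod(dst_grid_dims)) == n_b)

# ESMF mapfile above: grid dims match the sparse-matrix sizes
print(check_scrip_dims(777602, 48602, [777602], [48602]))  # (True, True)

# Tempest mapfile above: 86400 = 6*120^2 and 5400 = 6*30^2 look like
# element counts, not GLL-node counts, so the products do not match
print(check_scrip_dims(777602, 48602, [86400], [5400]))    # (False, False)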
Sounds like a Tempest bug. Part of the issue is a lack of clarity on how "src_grid_dims" and "dst_grid_dims" are defined, but I agree that the product of all dst_grid_dims should equal n_b. Notably, I never use src_grid_dims or dst_grid_dims for finite element meshes, since they are completely unstructured and typically stored as 1D arrays. Currently there is no support for 2D finite element meshes.
I am working this week on rewriting the overlap grid generator, since the existing code is slow, serial, and not robust. The code is a bit disorganized as a consequence, but I should be able to put in a fix for this issue.
~ Paul
Thanks. I added a workaround for 1D grids so NCO does the right thing when it encounters Tempest files with this problem in the 1D grid description. Workaround is in 4.5.1-alpha8. Please consider fixing nv[ab] and [xy]v[ab] too. This would improve the utility of the regridded files.
FYI I implemented the changes necessary for NCO's regridder to understand Tempest 1D<->ND global mapfiles (so ACME cubed-sphere grids work), and also the changes necessary for NCO to understand a Tempest regionally refined (RR) mapfile (so far tested only on a CONUS mapfile). These improvements are all in NCO 4.5.1, just released. The release notes do not emphasize the regional grid capabilities because only a single mapfile, provided by Wuyin Lin, has been tested: map_conusx4v1_to_fv0.1x0.1_US_bilin.nc
I have no idea how representative this is of other RR grids/maps produced by Tempest or SQuadGen. I want to test other RR grids/mapfiles that are qualitatively different than CONUS. If you have such a mapfile (and sample model data to regrid), please test it with ncks --map=map.nc in.nc out.nc and send me any feedback. Binaries in ~zender/bin/ncks on Yellowstone, Rhea, and (soon) Cooley. Or make your files available so I may test them. ncks has some useful features; for example, it tries to add appropriate latitude weights and area when the mapfile provides none. However, I'm sure users will think of other features they want, and I'm eager to get the RR and regional features working well.
cz
Thanks Charlie!
I’m working my way through the feature requests for TempestRemap this week. I’ve uploaded two regionally refined meshes that we use in our work here:
http://climate.ucdavis.edu/tcforecast_60_x4.nc
http://climate.ucdavis.edu/california_25km.nc
The first is an Atlantic basin tropical cyclone mesh used by Colin Zarzycki. The second is a regionally refined California mesh used for California climate assessment.
Another interesting feature to explore would be the use of incomplete maps, such as we currently use for remapping only a portion of the sphere to a regional latitude-longitude grid for analysis. It would be interesting to test this feature for remapping from the atmosphere to the ocean, for instance. The code doesn’t work at present for mapping an incomplete mesh to a global mesh (I assume this would be needed for mapping ocean fluxes to the atmosphere).
~ Paul
Thanks for the meshes, Paul. These are just the kind of grids I wish to test. Unfortunately, they stump me. They look kind of like ESMF unstructured grid format meshes with different variable names. However, ESMF_RegridWeightGen chokes on them. Am I supposed to rename the variables/dimensions to match ESMF expectations? Tempest tools, as far as I know, require input meshes in .g format, not .nc format. Is there a tool that converts between the two formats? c
Hi Charlie,
I haven’t tried those meshes in ESMF_RegridWeightGen myself although Miranda might have been more successful on that end. The files are technically Exodus (.g) format, but for some reason when I uploaded them as .g files Chrome refuses to download them. You could try renaming them as .g files and see if ESMF_RegridWeightGen is more amenable?
~ Paul
For those .g meshes to be used by ESMF_RegridWeightGen, you first have to run another program to generate the SCRIP file (these .g files have only the element corners, not all the GLL nodes needed by ESMF). Or generate mapping files with TempestRemap and skip ESMF.
Charlie, for fun I will email you directly two other grids I have (in SCRIP format): A high-res geodesic grid (from MPAS) and a SE var-res grid with 8x refinement over Svalbard.
So...why didn't I realize that .g files are in netCDF format? This whole time I thought .g was some other binary format invented to complicate life. Now Paul's comment about renaming files makes sense. I just used Tempest to generate a global->California mapfile. The mapfile defines [xy]v_[ab] but not [xy]c_[ab]. Other Tempest mapfiles (e.g., ne30<->ne120) I have tried produce [xy]c_[ab] but not [xy]v_[ab]. Is there a rationale that NCO can depend on for when Tempest mapfiles will or will not have valid [xy][cv]_[ab]? NCO currently fails to regrid from global 2D to California unstructured, but hey, that's why we test these things. Unless someone suggests a better alternative, I will create [xy]c_b as an average of the [xy]v_b data.
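[Editor's note: a sketch of the proposed corner-averaging fallback, not NCO's implementation. Latitudes can be averaged directly, but a naive mean of corner longitudes breaks for cells straddling the 0/360 seam, so one robust option is to average unit vectors instead:]

```python
import math

# Estimate a cell-center longitude from corner longitudes (degrees) by
# averaging unit vectors; result is in (-180, 180]. This handles cells
# that straddle the 0/360 seam, where a plain mean is wrong.
def center_lon(lons_deg):
    d2r = math.pi / 180.0
    x = sum(math.cos(l * d2r) for l in lons_deg)
    y = sum(math.sin(l * d2r) for l in lons_deg)
    return math.atan2(y, x) / d2r

yv = [10.0, 10.0, 11.0, 11.0]              # corner latitudes: plain mean is fine
yc = sum(yv) / len(yv)                      # 10.5
xc = center_lon([359.5, 0.5, 0.5, 359.5])   # seam-straddling cell: ~0.0
                                            # (a naive mean would give 180.0)
```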
Hi Charlie,
The mapfiles should contain both [xy]v_[ab] and [xy]c_[ab]. That must be a bug. Do you have a command line that can reproduce the issue?
~ Paul
GenerateRLLMesh --lat 180 --lon 360 --file ${DATA}/rgr/msh_180x360.g
GenerateOverlapMesh --a ${DATA}/rgr/msh_180x360.g --b ${DATA}/grids/california_25km.nc --out ${DATA}/rgr/msh_ovr_180x360_to_ca25km.g
GenerateOfflineMap --in_mesh ${DATA}/rgr/msh_180x360.g --out_mesh ${DATA}/grids/california_25km.nc --ov_mesh ${DATA}/rgr/msh_ovr_180x360_to_ca25km.g --out_map ${DATA}/maps/map_180x360_to_ca25km.20150715.nc
ncks -M -m -H -v xc_b,yc_b ${DATA}/maps/map_180x360_to_ca25km.20150715.nc | more
Found the issue. This should be fixed now in branch "next-gen".
~ Paul
Apologies for the delay. I just pushed out a fix that should put src_grid_dims and dst_grid_dims in line with the ESMF maps. Please verify.
~ Paul
On Wed, Jul 1, 2015 at 6:09 PM, Charlie Zender notifications@github.com wrote:
Hi All,
Thanks for responding, Mark. NCO uses the same remapping algorithm, and it regrids ESMF, SCRIP, and Tempest RLL files fine. There is a bug either in the Tempest 1D mapfiles or in the NCO treatment of Tempest 1D mapfiles in particular (NCO does ESMF 1D->ND fine). I think I have tracked down the issue to its root cause.
The dimension size n_b could be used to determine 1D grid sizes, but n_b alone is insufficient for 2D grids, so NCO always uses dst_grid_dims to determine output dimension sizes. Tempest produces different values of dst_grid_dims and src_grid_dims than ESMF. When NCO uses Tempest values, the gods become angry. Values of dst/src_grid_dims from ESMF and Tempest mapfiles are below.
Is there a good explanation for why ESMF and Tempest differ? I inferred that ApplyOfflineMap utilizes n_b (not dst_grid_dims). Sure enough, ApplyOfflineMap fails when asked to apply ESMF 1D->1D mapfiles. Based on SCRIP definitions of dst/src_grid_dims, this seems like a Tempest bug.
Thanks for continued feedback and fixes! c
zender@roulee:/data/zender/maps$ ncks -H -v dst_grid_dims,src_grid_dims --cdl map_ne120np4_to_ne30np4_aave.20150614.nc
netcdf map_ne120np4_to_ne30np4_aave.20150614 {
  dimensions:
    dst_grid_rank = 1 ;
    src_grid_rank = 1 ;
  variables:
    int dst_grid_dims(dst_grid_rank) ;
    int src_grid_dims(src_grid_rank) ;
  data:
    dst_grid_dims = 48602 ;
    src_grid_dims = 777602 ;
} // group /
zender@roulee:/data/zender/maps$ ncks -H -v dst_grid_dims,src_grid_dims --cdl map_ne120np4_to_ne30np4_tps.20150613.nc
netcdf map_ne120np4_to_ne30np4_tps.20150613 {
  dimensions:
    dst_grid_rank = 1 ;
    src_grid_rank = 1 ;
  variables:
    int dst_grid_dims(dst_grid_rank) ;
      dst_grid_dims:name0 = "num_elem" ;
    int src_grid_dims(src_grid_rank) ;
      src_grid_dims:name0 = "num_elem" ;
  data:
    dst_grid_dims = 5400 ;
    src_grid_dims = 86400 ;
} // group /
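[Editor's note: the two sets of numbers are mutually consistent if ESMF counts unique GLL nodes while Tempest counts spectral elements. A sketch assuming the standard cubed-sphere formulas (6*ne^2 quadrilateral elements; 6*(ne*(np-1))^2 + 2 unique GLL nodes for np=4), not taken from either tool's source:]

```python
# Check that the ESMF dims match GLL-node counts and the Tempest dims
# match element counts for ne30np4 and ne120np4 cubed-sphere meshes.
def n_elements(ne):
    return 6 * ne * ne                      # quadrilateral spectral elements

def n_gll_nodes(ne, npts=4):
    return 6 * (ne * (npts - 1))**2 + 2     # unique GLL nodes (shared edges/corners)

for ne, esmf_dims, tempest_dims in [(30, 48602, 5400), (120, 777602, 86400)]:
    assert n_gll_nodes(ne) == esmf_dims
    assert n_elements(ne) == tempest_dims
print("ESMF dims = GLL nodes; Tempest dims = elements")
```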
this branch does not yet compile for me:
zender@roulee:/data/zender/tmp/tempestremap$ make
cd src; make
make[1]: Entering directory '/home/data/zender/tmp/tempestremap/src'
g++ -std=c++11 -O2 -g -I/usr/local/include -c -o build/OverlapMesh.o OverlapMesh.cpp
OverlapMesh.cpp: Dans la fonction ‘void GenerateOverlapMesh_v1(const Mesh&, const Mesh&, Mesh&, OverlapMeshMethod)’:
OverlapMesh.cpp:1218:17: note: #pragma message: OpenMP here
#pragma message "OpenMP here"
^
OverlapMesh.cpp: Dans la fonction ‘void GenerateOverlapMesh_v2(const Mesh&, const Mesh&, Mesh&, OverlapMeshMethod)’:
OverlapMesh.cpp:1880:44: erreur: operadores inválidos de tipos ‘char*’ y ‘std::nullptr_t’ para el binario ‘operator-’
int iTargetFaceSeed = ((char*)(pdata)) - nullptr;
^
Make.defs:22: recipe for target 'build/OverlapMesh.o' failed
make[1]: *** [build/OverlapMesh.o] Error 1
make[1]: Leaving directory '/home/data/zender/tmp/tempestremap/src'
Makefile:5: recipe for target 'all' failed
make: *** [all] Error 2
(compiler warnings/errors are in french/spanish...part of my environment)
I pushed a modification that should fix the issue. Some compilers don’t like taking the difference with a nullptr.
~ Paul
Thanks for the fix. Recompiled and tested the California grid, and both [xy]v_[ab] and [xy]c_[ab] are there.
Notably, [xy]v_[ab] are absent from finite element maps. I'm not sure how those would even be defined.
~ Paul