ocean-eddy-cpt / gcm-filters-paper

Manuscript on spatial filtering method

Eddy Momentum Example on MOM5 data #18

Closed LaureZanna closed 3 years ago

LaureZanna commented 3 years ago

As per my discussions with @iangrooms , @ElizabethYankovsky and @arthurBarthe , we will contribute the following

arthurBarthe commented 3 years ago

Ok, I have one or two remaining questions about this after today's talk:

Edit: About the first question, I'm happy to think about it myself; I'm just asking in case you already have an opinion about how you want this to be done.

iangrooms commented 3 years ago

I think for the "computational speed" section it's fine to compare a simple, non-tuned python implementation to a sophisticated C-code implementation as long as we state that in the text. We're not trying to argue that our method, if optimized to the hilt, is better than someone else's method also optimized to the hilt. We're trying to show that our code is both fast (compared to what a postdoc might try to build for themselves) and is a lot easier to use than coding something yourself.

We can deal with land in two ways using our filters: treat values on land as 0, or apply a no-flux boundary condition in our filter. For the purposes of comparing, it would be simple to run both our filter and yours with land = 0. It might also be interesting to compare our no-flux version to whatever happens to be built into scipy.
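
For concreteness, a minimal sketch of the land = 0 treatment (the array names and sizes below are hypothetical, not code from either filter):

```python
import numpy as np

# hypothetical surface field and wet mask (1 = ocean, 0 = land)
field = np.random.rand(200, 300)
wet_mask = np.ones_like(field)
wet_mask[:, :20] = 0  # pretend the western strip is land

# the "land = 0" treatment: zero out land points before handing the field to any filter
field_land_zero = np.where(wet_mask == 1, field, 0.0)
```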

I'm curious, what do you mean by "scipy's one" and "scipy's default"?

rabernat commented 3 years ago

On the topic of benchmarking, this blog post is a great read: https://matthewrocklin.com/blog/work/2017/03/09/biased-benchmarks

iangrooms commented 3 years ago

Nice post about fair benchmarking! In some sense we're not really benchmarking for the paper. Completely fair benchmarking compares different implementations of the exact same algorithm or of different algorithms for computing the exact same thing. Our speed tests are actually comparing different algorithms that compute different (but qualitatively similar) answers.

arthurBarthe commented 3 years ago

Hi, sorry for the late reply - I had to digest some of the things you said! Thanks, that helps. If I understand correctly, in the end the iterated-Laplacian filter, even with the target spec set to Gaussian, is not supposed to approximate the usual Gaussian filter anyway, so we're not interested in saying "it is faster" but rather that it provides a filtering method that makes sense for GCMs and is still computationally inexpensive.

Right now the basic Gaussian filter I apply just uses the grid cell areas for weighting, and uses zeros for continent data. Is it okay if I put that code in the gcm-filters package? I could put it in a separate file containing basic spatial filters.

@iangrooms sorry I wasn't very clear, I meant scipy's default treatment of points outside the domain.

rabernat commented 3 years ago

Is it okay if I put that code in the gcm-filters package?

This would require some more discussion. My understanding was that our package would implement and optimize this specific method for filtering. Doing standard convolution-based filtering would be a pretty big departure. Have you gotten your method working on GPU?

arthurBarthe commented 3 years ago

As you like! No, but I can do that if needed.

iangrooms commented 3 years ago

@arthurBarthe - which scipy package/function are you using? One of the image-processing tools?

I don't think we should put convolution-based filters in the gcm-filters package. But for reproducibility, if the code is used for the paper you can put it in the gcm-filters-paper repo. That's where I put my Fortran-based convolution code.

arthurBarthe commented 3 years ago

Ok, I'll do that, thanks. I'm using both scipy.ndimage's gaussian_filter function and my own Python-level implementation right now.

iangrooms commented 3 years ago

Quick comment about scipy.ndimage's gaussian filter: it appears that the only things it can do are

  1. Fixed-factor filtering, not fixed-scale
  2. Setting values on land to 0 (cf. no-flux in the Laplacian version)
  3. Local regions, since it can't handle spherical geometry

To make it conserve the integral you probably have to multiply by cell area before filtering and then divide by cell area after filtering. Also, to get a local region without contamination from the boundary, the region has to be a subset of a bigger region, where the "halo" is about 3 standard deviations of the Gaussian.
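
A minimal sketch of that area-weighting and halo trick (the field, cell areas, and sigma below are hypothetical; scipy.ndimage.gaussian_filter is the only real API used):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# hypothetical field, cell areas, and Gaussian standard deviation (in grid points)
field = np.random.rand(400, 600)
area = np.random.uniform(0.9, 1.1, size=field.shape)
sigma = 10

# multiply by cell area before filtering and divide by it afterwards,
# so the area integral of the field is (approximately) conserved
filtered = gaussian_filter(area * field, sigma=sigma) / area

# to keep a local region free of boundary contamination, filter a larger region
# and then trim a halo of roughly 3 standard deviations
halo = int(np.ceil(3 * sigma))
interior = filtered[halo:-halo, halo:-halo]
```
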
iangrooms commented 3 years ago

Regarding the things "to do" in this issue, I'm leaning towards not saying much about which filter shape is best, or even comparing them, and instead saying one can use our method to implement whatever shape one prefers. So maybe we don't need

Implementations of the Gaussian, boxcar and Laplacian filters on a region of the ocean and an analysis of the effect of the filter shapes

Also, I don't want to scoop Laure's in-prep work on that subject.

To wrap the remaining two bullet points into a single example, you might compute vorticity from smoothed velocity and compare it to smoothed vorticity (showing non-commutation) in a region of the ocean, using the Laplacian filter. Then compare timings of the same thing using either scipy's ndimage Gaussian filter or the other Python-level version.
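
As a rough sketch of that comparison (using scipy's gaussian_filter as a stand-in for the Laplacian-based filter, with synthetic data and a made-up land mask, purely to show the workflow rather than the actual example):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# synthetic velocity fields on a uniform grid, with a crude land mask
ny, nx, dx = 300, 400, 1.0
u = np.random.randn(ny, nx)
v = np.random.randn(ny, nx)
wet = np.ones((ny, nx))
wet[:, :30] = 0  # pretend the western strip is land

def vorticity(u, v, dx):
    """Relative vorticity dv/dx - du/dy by centered differences."""
    return np.gradient(v, dx, axis=1) - np.gradient(u, dx, axis=0)

def smooth(field, sigma=10):
    """Gaussian filter with the land = 0 treatment."""
    return gaussian_filter(np.where(wet == 1, field, 0.0), sigma=sigma)

# (1) filter the vorticity directly; (2) take the vorticity of the filtered velocities
zeta_then_filter = smooth(vorticity(u, v, dx))
filter_then_zeta = vorticity(smooth(u), smooth(v), dx)

# the non-commutation shows up mostly near the land mask
difference = zeta_then_filter - filter_then_zeta
```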

LaureZanna commented 3 years ago

@iangrooms: I am happy about not comparing filter shapes, since I would prefer it to be a CPT-wide exercise (or at least one that invites anyone interested).

Also, fine with doing vorticity - or rather eddy momentum forcing - I already have code for the vorticity on the MOM5 B-grid so we can do that.

We are still looking at u only for now since there are some remaining convergence issues with the Laplacian filter, but hopefully we will fix them soon.

ElizabethYankovsky commented 3 years ago

We have now gotten the Laplacian filter set up for MOM5. We will be testing to see if the filter commutes by comparing: (1) vorticity from filtered velocity fields and (2) directly filtered vorticity. We want to decide on a few points for performing this analysis:

iangrooms commented 3 years ago

The "fixed factor" scale filter is not expected to commute, so it might be more interesting to use the fixed-length version. Filtering from 1/4 to 1 degree (maybe ~100km filter scale) seems reasonable to me. It would be good to choose a domain with a lot of vorticity; maybe Kuroshio extension. The domain should be big enough that there is a region in the middle that is not contaminated by boundary effects

jakesteinberg commented 3 years ago

For reference, I've been using the fixed-length version (say for a filtering factor of 5... going from a 10km grid to 50km scale). I've also tried going from 1/4 to 1 degree. I'd be very interested to compare results and investigate implications of using one vs. the other.

iangrooms commented 3 years ago

PS, when I said

Filtering from 1/4 to 1 degree (maybe ~100km filter scale) seems reasonable to me.

I meant using something like a fixed scale of 100 km, not a fixed factor of 4.

ElizabethYankovsky commented 3 years ago

Hi everyone, @arthurBarthe and I are still making some changes to the Laplacian (@arthurBarthe has submitted a PR; we still need to sort out issues related to wet/dry cells and how that impacts conservation). Here are some preliminary plots of the region we are focusing on -- the western Pacific. The first plot is the vorticity computed from surface u and v velocities. The second plot is the filtered vorticity using a 100 km length scale. The third plot shows vorticity computed from filtered u and v velocities -- you can see that this gives the same results as the filtered vorticity.

[Figures: vorticity, filtered_vorticity, vorticity_filtereduv]

iangrooms commented 3 years ago

Can you plot the difference? We do expect these to give similar, but not exactly equal results.

Also, the iterated-Laplacian filters should give you answers over the whole domain so I don't understand the empty diamond-shaped regions in the filtered fields. What filter are you using?

NoraLoose commented 3 years ago

I think you would see the difference exactly in the empty diamond-shaped regions. These are the regions where land is less than n_steps grid cells away. Once the issue with the wet_mask is solved (see the discussion in https://github.com/ocean-eddy-cpt/gcm-filters/pull/27), the differences will show in the (currently white) near-coastal areas. You could also apply the IrregularCartesianLaplacianWithLandMask with the proper grid inputs to avoid contamination close to land (the latter should be identical to the Laplacian you are using now, but with proper handling of continents and islands).

ElizabethYankovsky commented 3 years ago

We had some issues with NaN values and masking, but have resolved those. Here is the revised version of the figures above. The first plot is the unfiltered vorticity, the second is the filtered vorticity, the third is vorticity from filtered u,v fields, and the fourth plot is the difference between the two filtered vorticity fields. As you can see, the difference is focused around coastal boundaries (as @NoraLoose mentioned in her last comment).

[Figures: vorticity1, vorticity2, vorticity3, vorticity4]

ElizabethYankovsky commented 3 years ago

To correct the errors seen along the domain boundaries in the above plots, I repeated the filtering over a larger domain. Here are the same plots as above but with the boundary effects removed:

[Figures: vorticity1, vorticity2, vorticity3, vorticity4 (larger filtering domain, boundary effects removed)]

iangrooms commented 3 years ago

These plots look great @ElizabethYankovsky! In addition to illustrating that the filter doesn't commute near boundaries, this gives an example where treating velocity components like scalars, which is technically a bad idea, is practically not a problem.

Can you add the code to generate these figures to the repository? We'd like to have the code here for the paper.

ElizabethYankovsky commented 3 years ago

@iangrooms thanks! I generated the figures using a notebook that I'll add to the relevant repository today. Is there a particular location that you would prefer this to be added to? Also, should a data file be included (we have been using 1 time snapshot)?

iangrooms commented 3 years ago

This is not a carefully organized repository, so I think just adding the notebook directly to the main folder is OK.

Good question about the data. Our POP temperature data is already 4 GB, and I imagine the MOM5 data is also fairly large (is it already on pangeo?). It might be best to put that on figshare, unless someone has a suggestion for a better place to put data for the paper. In the meantime I think just wait to upload any data.

ElizabethYankovsky commented 3 years ago

Ok, sounds good. And yes, I believe the data is already on pangeo (@arthurBarthe please correct me if I'm wrong).

ElizabethYankovsky commented 3 years ago

Also, to follow up on the earlier issues regarding speed testing. Now that we have the Laplacian and Gaussian filters working, we were wondering about two points:

* Is there a standardized way in which we should test the speeds of Laplacian vs. Gaussian filtering on MOM5 data to be consistent with what is being done for other kernels?
* Should we get our filter working on GPU?

arthurBarthe commented 3 years ago

Right now I think our data is quite small; we have only taken 10 time steps of surface velocities. But yes, the data is available on pangeo otherwise; we could modify the code to download that same data from pangeo, although it would require the user to have set up a project on Google Cloud Platform and a billing account.

rabernat commented 3 years ago

although it would require the user to have set up a project on Google Cloud Platform and a billing account.

FWIW, we are hoping to soon be able to make these data fully public. For now you can point to the dataset URLs on https://catalog.pangeo.io/, e.g. https://catalog.pangeo.io/browse/master/ocean/GFDL_CM2_6/.

iangrooms commented 3 years ago

If the filter isn't already running on the GPU then I don't think there's a need to do a lot of work to get it on the GPU.

Also, to follow up on the earlier issues regarding speed testing. Now that we have the Laplacian and Gaussian filters working, we were wondering about two points:

* Is there a standardized way in which we should test the speeds of Laplacian vs. Gaussian filtering on MOM5 data to be consistent with what is being done for other kernels?

As long as the comparison is "fair" in the sense that the two methods are timed on the same CPU/platform and both use the same amount of parallelism, then I think that's all we need. I prefer to run each test 10 times and then report the mean, because there is usually some variability.
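
For instance, a timing harness along these lines (the two run_* functions below are hypothetical placeholders; timeit.repeat is the only real API used) would implement the run-10-times-and-report-the-mean suggestion:

```python
import timeit
import numpy as np

def run_gcm_filters_case():
    ...  # hypothetical: apply the Laplacian-based filter to the MOM5 snapshot

def run_scipy_case():
    ...  # hypothetical: apply scipy.ndimage's gaussian_filter to the same snapshot

# time each implementation 10 times on the same CPU/platform and report the mean
for name, func in [("gcm-filters", run_gcm_filters_case), ("scipy", run_scipy_case)]:
    times = timeit.repeat(func, number=1, repeat=10)
    print(f"{name}: mean {np.mean(times):.3f} s over {len(times)} runs")
```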

* Should we get our filter working on GPU?

If the code isn't already on the GPU then I don't think there's a need to go to the effort of putting it on the GPU. I wouldn't want to make you do a lot of work on something that you won't end up using later on.

iangrooms commented 3 years ago

Right now I think our data is quite small; we have only taken 10 time steps of surface velocities. But yes, the data is available on pangeo otherwise; we could modify the code to download that same data from pangeo, although it would require the user to have set up a project on Google Cloud Platform and a billing account.

I don't think you need to set up the notebook to download the data. Just putting in a link (from @rabernat's comment) and maybe a comment telling the user that they have to register is OK. If this were testing data for the gcm-filters package I think we might want to be more user-friendly, but for the paper I think it's enough to just say the data is available and point to it.

LaureZanna commented 3 years ago

For the data: we will add the same link we have in @arthurBarthe's paper, which points to Pangeo. For the filters: we are only set up on CPU, not GPU yet.

NoraLoose commented 3 years ago

@ElizabethYankovsky, @arthurBarthe: I think I have some ideas for how we could make your MOM5 Laplacian filters a touch faster - essentially by moving calculations like

```python
0.5 * (np.roll(self.dyt, -1, axis=-1) + np.roll(self.dyt, (-1, -1), axis=(0, 1)))
```

to the __post_init__ method, rather than re-doing them in the __call__ method for every Laplacian iteration. On the other hand, the speed-up might be hardly noticeable if you choose a small n_steps because, as of now, the initialization of the Laplacian is still happening inside filter_func. So I'm not sure if it's worth making the change - let me know what you think. I just don't want to undersell the speed of our Laplacian filters. :)
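
For what it's worth, here is a minimal sketch of that pattern (the class name and the trivial __call__ body are illustrative only, not the actual gcm-filters kernel):

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class MOM5LaplacianSketch:
    """Illustrative only: cache grid-derived factors once instead of per call."""
    dyt: np.ndarray
    dyt_avg: np.ndarray = field(init=False)

    def __post_init__(self):
        # computed once, when the Laplacian object is constructed
        self.dyt_avg = 0.5 * (
            np.roll(self.dyt, -1, axis=-1)
            + np.roll(self.dyt, (-1, -1), axis=(0, 1))
        )

    def __call__(self, phi):
        # each Laplacian iteration reuses self.dyt_avg rather than recomputing it;
        # the actual stencil is omitted here
        return self.dyt_avg * phi
```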

arthurBarthe commented 3 years ago

@iangrooms About the code: do you want it to be self-contained in this repo, or is it ok to refer to the gcm-filters repo?

About GPU, I can take a look, it should not be much of a challenge as we use functions common to numpy / pytorch (@NoraLoose are you using pytorch?)

@NoraLoose Yes, I remember you also saying that due to the similarities between the POP and MOM5 Laplacians we could use a common class from which both filters would inherit these init steps. Is that still in the works?

@rabernat Great to hear that the data is going to become public :)

iangrooms commented 3 years ago

@iangrooms About the code: do you want it to be self-contained in this repo, or is it ok to refer to the gcm-filters repo?

It's preferred to link to the gcm-filters repo, and I expect to cite that repo even if we don't have a JOSS paper yet (which I expect we won't).

NoraLoose commented 3 years ago

About GPU, I can take a look, it should not be much of a challenge as we use functions common to numpy / pytorch (@NoraLoose are you using pytorch?)

Are you asking about the gcm-filters code? It should work "out of the box" on GPU because we have set up the package to act on Dask arrays with either numpy or cupy blocks. Here is an example of how to use gcm-filters on GPU.
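
Not the code from the linked example, but a sketch of the cupy-blocks pattern, assuming the CARTESIAN_WITH_LAND grid type discussed in this thread and that it takes only a wet_mask as grid input (the data, sizes, and exact argument names below are placeholders):

```python
import cupy
import dask.array as dsa
import xarray as xr
import gcm_filters

# hypothetical surface field and wet mask as dask-backed DataArrays
ny, nx = 1080, 1440
data = xr.DataArray(dsa.random.random((ny, nx), chunks=(ny, nx)), dims=["y", "x"])
wet_mask = xr.DataArray(dsa.ones((ny, nx), chunks=(ny, nx)), dims=["y", "x"])

# swap the numpy blocks for cupy blocks so the computation runs on the GPU
data_gpu = data.copy(data=data.data.map_blocks(cupy.asarray))
wet_gpu = wet_mask.copy(data=wet_mask.data.map_blocks(cupy.asarray))

# grid-type and argument names follow the pre-release naming used in this thread
filt = gcm_filters.Filter(
    filter_scale=4,
    dx_min=1,
    filter_shape=gcm_filters.FilterShape.GAUSSIAN,
    grid_type=gcm_filters.GridType.CARTESIAN_WITH_LAND,
    grid_vars={"wet_mask": wet_gpu},
)
filtered = filt.apply(data_gpu, dims=["y", "x"])
```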

arthurBarthe commented 3 years ago

Thanks Nora! @iangrooms Right now the example we have shown you is with fixed-length filtering. To compare with scipy's gaussian filter does it make sense to use a fixed factor for the latter?

iangrooms commented 3 years ago

To compare with scipy's gaussian filter does it make sense to use a fixed factor for the latter?

Yes, that makes sense. Just to be clear, there's no need to change the figures associated with this issue/example. But for the speed tests comparing Laplacian vs scipy/Gaussian I think it would make sense to use the fixed factor version of the Laplacian.

arthurBarthe commented 3 years ago

Ok that makes sense, both fixed factor for the speed test. Thanks!

NoraLoose commented 3 years ago

@NoraLoose Yes, I remember you also saying that due to the similarities between the POP and MOM5 Laplacians we could use a common class from which both filters would inherit these init steps. Is that still in the works?

I would really love to discuss this soon, so that we can target our future efforts efficiently (rather than several people writing the same Laplacian multiple times).

@rabernat suggested to integrate xgcm into the gcm-filters package. If we have xgcm in there, we may be able to use only one scalar Laplacian for all regional model data (no matter if from POP, MOM etc), one scalar Laplacian for all tripolar grids (POP and MOM5 could use the exact same one), one vector Laplacian for all C-grid models, and so on.

NoraLoose commented 3 years ago

Ok that makes sense, both fixed factor for the speed test.

@arthurBarthe, I think the CARTESIAN_WITH_LAND Laplacian is ready to go for your fixed factor MOM5 filter tests. If you want to do global (including tripolar exchanges across the northern boundary seam), you could use the POP_SIMPLE_TRIPOLAR_T_GRID Laplacian. Maybe this notebook will be helpful for you - it contains speed tests for these Laplacians using POP data.

LaureZanna commented 3 years ago

@NoraLoose: I like the way you think! I was saying the same to @arthurBarthe and @ElizabethYankovsky - no need to write the same thing many times, especially when we have similar grids. The vector Laplacian for MOM5 (where we initially started) took some effort, but if you have done it for POP then it should be similar for MOM5.

iangrooms commented 3 years ago

Yes, since their example is for velocity we don't actually have a solution on the tripole at this point. But I don't think we need to wait for that for their current example, which is just north Pacific and doesn't feel the tripole.

arthurBarthe commented 3 years ago

@NoraLoose happy to discuss that soon as well!

I'll probably use CARTESIAN_WITH_LAND and stay away from the northern boundary for now.

NoraLoose commented 3 years ago

@NoraLoose: I like the way you think! I was saying the same to @arthurBarthe and @ElizabethYankovsky - no need to write the same thing many times, especially when we have similar grids.

Thanks @LaureZanna! I have been trying to say this multiple times, and to encourage the use of what is already there in order to save everyone's time. But it may have gotten lost in the sea of GitHub comments. :smiley:

LaureZanna commented 3 years ago

My apologies for having missed it before, @NoraLoose, and if we created more work for you. I am indeed struggling with the enormous number of messages/issues on GitHub etc., but we will get there! Thanks again for helping us.

rabernat commented 3 years ago

The question of whether to use the same code involves tradeoffs and time scales. I think we all agree that duplication should be avoided and we should try to make the kernels as generic as possible. But doing this requires coordination and communication among the team, which is a different form of effort. In my experience, generalizing a code to work with different models, despite their similarity, opens up many new challenges.

If we have xgcm in there, we may be able to use only one scalar Laplacian for all regional model data (no matter if from POP, MOM etc), one scalar Laplacian for all tripolar grids (POP and MOM5 could use the exact same one), one vector Laplacian for all C-grid models, and so on.

This is indeed the dream! But spending time on this means not spending time on the specific examples for the paper.

One idea for the short term would be to try to use the gcm_filters package for our filtering, but with different kernels coded specifically for the different models, as is being implemented currently in https://github.com/ocean-eddy-cpt/gcm-filters/pull/27, https://github.com/ocean-eddy-cpt/gcm-filters/pull/29, etc. If these all have good tests, then we can later refactor the package to use a more generic laplacian, plus xgcm to parse the grid information from the inputs.

I think a quick call would go a long way towards coordinating these efforts. Do people have time for something like that next week?

ElizabethYankovsky commented 3 years ago

I've added the notebook used in creating the above figures. I agree that we should coordinate the vector Laplacian development, would be great to have a call about this next week.

arthurBarthe commented 3 years ago

@iangrooms I'm trying to make a figure with timings for a few values of n_steps on the x axis (and some "comparable" parameter for scipy's Gaussian filter, truncate), and on the y axis the times as well as the error from the "true" Gaussian-filtered field (computed with scipy's Gaussian filter using a high value of truncate).

I need the scales of the two filtering approaches to have the same meaning, and I remember you changed the definition at some point. Right now I'm using sigma (a unitless factor, say 8) for the gcm-filters function, and sigma / 2 for scipy's (their definition of the scale is the standard deviation of a Gaussian probability density). Doing this, the difference between the two methods converges to a non-zero value, so I think I must have misunderstood the definition of the filter scale in gcm-filters.

So my question, finally: how do you define the scale of a Gaussian filter in gcm-filters? Thanks!

iangrooms commented 3 years ago

@arthurBarthe we defined filter_scale so that it would give similar results across Gaussian, boxcar, and Taper filters. We used a boxcar of width L (not half-width) as the baseline. The way to get a Gaussian that 'matches' a boxcar of width L is to have the standard deviation of the Gaussian be L/\sqrt{12}. If you use a "filter scale" of L in the gcm-filters package it corresponds to a standard deviation of L/\sqrt{12}.
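
So, to line up the two scale definitions in your comparison, the sigma passed to scipy's gaussian_filter would be filter_scale / sqrt(12), converted to grid points. A quick sketch, assuming a uniform grid spacing dx (the numbers are just illustrative):

```python
import numpy as np

filter_scale = 100e3   # gcm-filters "filter scale" L, in meters (hypothetical choice)
dx = 10e3              # uniform grid spacing, in meters (hypothetical choice)

# a filter_scale of L corresponds to a Gaussian standard deviation of L / sqrt(12);
# scipy.ndimage.gaussian_filter expects that standard deviation in grid points
sigma_scipy = filter_scale / np.sqrt(12) / dx
print(sigma_scipy)  # about 2.9 grid points
```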

Separately, I don't want to call scipy's Gaussian filter the "true" Gaussian filter because it's assuming a regular Cartesian mesh, it doesn't conserve the integral[footnote], and it has a very ad hoc treatment of boundaries. It is of course still interesting to see how different our fixed-factor Gaussian filter is from scipy's Gaussian, but I would not call the difference an "error." I still expect the difference to converge to a nonzero value even with the standard deviations matched, because the filters are simply computing different things (especially near boundaries).

[footnote] If you first multiply by the cell areas, then filter, then divide by cell areas, scipy's Gaussian filter might conserve the integral, provided you integrate over the whole domain including land.

NoraLoose commented 3 years ago

Separately, I don't want to call scipy's Gaussian filter the "true" Gaussian filter because it's assuming a regular Cartesian mesh, it doesn't conserve the integral[footnote], and it has a very ad hoc treatment of boundaries. It is of course still interesting to see how different our fixed-factor Gaussian filter is from scipy's Gaussian, but I would not call the difference an "error." I still expect the difference to converge to a nonzero value even with the standard deviations matched, because the filters are simply computing different things (especially near boundaries).

I suppose the way scipy's Gaussian filter handles continental boundaries is most similar to our CARTESIAN Laplacian: simply ignore them, and spread some land fill value into the ocean. The figure I posted yesterday uses the CARTESIAN_WITH_LAND Laplacian for the middle panel, and the CARTESIAN Laplacian for the right panel:

[Figure: SST_boundary_unaware]

@arthurBarthe, if it's not too much work you could therefore include both the CARTESIAN_WITH_LAND and CARTESIAN in your speed tests. In my experience, the CARTESIAN (ignoring continental boundaries) is about 3 times faster than the CARTESIAN_WITH_LAND (handling continental boundaries correctly), at least on CPU.