OceanParcels / Parcels

Main code for Parcels (Probably A Really Computationally Efficient Lagrangian Simulator)
https://www.oceanparcels.org
MIT License

Internally compute Field derivatives #970

Open daanreijnders opened 3 years ago

daanreijnders commented 3 years ago

Parcels currently lacks automatic computation of the derivative of a Field. A use case is computing the gradient of a diffusivity field. In the AdvectionDiffusion kernels, this is now implemented by sampling the diffusivities around a particle (±dres in longitude/latitude) and computing a central difference from these sampled quantities. This implementation is suboptimal, not least because the user must choose an appropriate dres and every gradient evaluation requires extra field sampling around the particle.
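For reference, the current approach in kernel code looks roughly as follows (a simplified sketch in the spirit of the AdvectionDiffusion kernels; details may differ from the actual implementation):

```python
def SampleKhGradient(particle, fieldset, time):
    # Approximate dK/dx by sampling the zonal diffusivity +/- dres in
    # longitude around the particle and taking a central difference
    # (simplified sketch of the current AdvectionDiffusion approach)
    dKdx = (fieldset.Kh_zonal[time, particle.depth, particle.lat, particle.lon + fieldset.dres]
            - fieldset.Kh_zonal[time, particle.depth, particle.lat, particle.lon - fieldset.dres]
            ) / (2 * fieldset.dres)
```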

Another way to use derivatives with Parcels at present is to preprocess the data and construct derivative fields (e.g. using packages like xgcm), but preprocessing can be costly and requires the derivatives to be stored on disk. The extra I/O this entails when running Parcels is another bottleneck.

Proposed implementation and syntax

Derivatives can be computed under the hood as central differences on the gridded data. The locations at which these derivatives are defined are then the same as in an OGCM: for C-grids, the derivatives of tracers are defined at the respective cell interfaces, while the derivatives of velocities are located at the cell centers or at the f-points. Since the user already needs to supply the f-points, the cell centers and the locations of the interface centers can be inferred. In A-grids, the derivative is simply located at the midpoint between two points, which is easily computed.
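For concreteness, a minimal NumPy sketch of the A-grid case (all data hypothetical), where the two-point difference is naturally defined at the midpoint between adjacent tracer points:

```python
import numpy as np

# Hypothetical 1D A-grid: tracer T sampled at points lon[i]
lon = np.linspace(0.0, 10.0, 11)
T = np.sin(lon)

# Difference between adjacent points; the result is defined at the
# midpoints between the tracer points, as described above
dTdlon = (T[1:] - T[:-1]) / (lon[1:] - lon[:-1])
lon_mid = 0.5 * (lon[1:] + lon[:-1])   # locations where dTdlon is defined
```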

Users should be able to choose how the derivative at the particle location is defined: nearest neighbor or linear interpolation. In the case of linear interpolation, the following figure shows a simple 2D example of which tracer cells should be accessed to define the zonal derivative at the particle's location (red) using a flat mesh:

To interpolate the derivative to the particle location, the derivative values should in this case be computed at the locations of the 4 closest u-points (yellow), meaning that a total of 6 tracer cells are accessed. In 3D grids, this extends to 12 cells.
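A minimal sketch of that interpolation step on a regular flat mesh (names and shapes are illustrative):

```python
import numpy as np

def interp_zonal_derivative(dTdx_u, xu, yu, xp, yp):
    """Bilinearly interpolate a gridded derivative dTdx_u, defined at
    u-points with coordinates (xu, yu), to a particle at (xp, yp).
    Assumes a regular flat mesh and an interior particle position."""
    i = np.searchsorted(xu, xp) - 1
    j = np.searchsorted(yu, yp) - 1
    xsi = (xp - xu[i]) / (xu[i + 1] - xu[i])
    eta = (yp - yu[j]) / (yu[j + 1] - yu[j])
    return ((1 - xsi) * (1 - eta) * dTdx_u[j, i]
            + xsi * (1 - eta) * dTdx_u[j, i + 1]
            + (1 - xsi) * eta * dTdx_u[j + 1, i]
            + xsi * eta * dTdx_u[j + 1, i + 1])
```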

In curvilinear fields, a zonal or meridional derivative can be determined by computing the gridded horizontal gradient and projecting it onto the zonal or meridional direction (to compute a gradient in 2D, 9 cells need to be accessed, since a derivative is computed in both horizontal directions).
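The projection step could look like this (a sketch; theta would have to be known to, or inferred by, Parcels from the grid geometry):

```python
import numpy as np

def project_gradient(dfdX, dfdY, theta):
    """Rotate a gradient given in grid directions (X, Y), in per-metre
    units, to zonal/meridional components. theta is the angle (radians)
    of the grid X axis, measured counter-clockwise from east."""
    dfdx_zonal = np.cos(theta) * dfdX - np.sin(theta) * dfdY
    dfdy_meridional = np.sin(theta) * dfdX + np.cos(theta) * dfdY
    return dfdx_zonal, dfdy_meridional
```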

I propose an implementation where a derivative can be readily accessed by kernels as an attribute of a Field, i.e.:

```python
fieldset.field.derivative[dim, time, particle.depth, particle.lat, particle.lon]
```

where dim is a string indicating the dimension in which the derivative is taken, e.g. 'temporal', 'zonal', 'meridional', or 'vertical'.
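In a kernel, this could then be used as follows (illustrative only; this API does not exist yet):

```python
def SampleGradients(particle, fieldset, time):
    # Hypothetical use of the proposed syntax: sample the zonal and
    # vertical derivatives of temperature at the particle location
    dTdx = fieldset.T.derivative['zonal', time, particle.depth, particle.lat, particle.lon]
    dTdz = fieldset.T.derivative['vertical', time, particle.depth, particle.lat, particle.lon]
```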

hetland commented 3 years ago

I think that xgcm has a good model for calculating derivatives on structured grids. The key is to create a grid class that contains the metrics needed to do the differentiation.

However, in the case described above, I think it makes more sense to calculate derivatives with only the four surrounding points, as the derivative of a bilinear interpolation, which I think is something that parcels already uses. A disadvantage is that this will only work for single derivatives, as the 'stencil' is fixed to the four points.
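For concreteness: within one cell the bilinear interpolant is f = f00 (1-xsi)(1-eta) + f10 xsi (1-eta) + f01 (1-xsi) eta + f11 xsi eta, so its x-derivative uses only the four corner values (a sketch for a regular flat mesh):

```python
def bilinear_ddx(f00, f10, f01, f11, eta, dx):
    """x-derivative of the bilinear interpolant within one cell, given
    the four corner values, the fractional y-position eta and the cell
    width dx. Note it is independent of the fractional x-position, i.e.
    piecewise constant in x, which is why the stencil is fixed."""
    return ((f10 - f00) * (1 - eta) + (f11 - f01) * eta) / dx
```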

I would also recommend naming the derivatives 'X', 'Y', etc., like xgcm does, since on a curvilinear grid 'X' might not necessarily be 'zonal'.

daanreijnders commented 3 years ago

Thanks for weighing in, @hetland. It's very important to discuss these points before we start any implementations.

Using the derivative of the bilinear interpolation indeed seems logical, especially in the context of velocities: say you want to compute the acceleration, then it makes sense that this acceleration matches the one 'felt' by the Lagrangian particles during advection. Still, having the ability to compute a gridded derivative field and use linear interpolation on those values can be useful too. I have two examples.

The first is advection in a simple elliptical streamfunction field, where we advect particles using velocities computed from (1) derivatives of the analytical expression, (2) linear interpolation of the gridded derivative, and (3) nearest-neighbor interpolation of the gridded derivative (equivalent to using the derivative of the bilinear interpolation of the streamfunction itself):

[Figure: trajectories in the elliptical streamfunction field, initialized at the cross marks.]

Here, linearly interpolating the gridded derivatives works much better.
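For reference, the gridded setup of this experiment can be sketched as follows (parameters and resolution are illustrative):

```python
import numpy as np

# Elliptical streamfunction on a flat mesh (illustrative parameters)
a, b = 2.0, 1.0
x = np.linspace(-3.0, 3.0, 61)
y = np.linspace(-2.0, 2.0, 41)
X, Y = np.meshgrid(x, y)
psi = (X / a) ** 2 + (Y / b) ** 2

# (1) analytical velocities: u = -dpsi/dy, v = dpsi/dx
u_true = -2.0 * Y / b ** 2
v_true = 2.0 * X / a ** 2

# (2)/(3): gridded central differences (np.gradient differentiates
# along y, then x for this array shape), to be sampled at the particle
# position with linear or nearest-neighbor interpolation respectively
dpsi_dy, dpsi_dx = np.gradient(psi, y, x)
u_grid, v_grid = -dpsi_dy, dpsi_dx
```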

Another example is computing a Redi-like diffusivity tensor from potential temperature and salinity to use as a diffusion parameterization. The Redi tensor includes elements related to the slope of the isopycnals. To compute it, one must therefore take derivatives of these quantities, which are usually readily available as model output. I performed an experiment similar to the one in Shah et al. (2011)[^1], releasing particles on an idealized isopycnal surface, letting them diffuse using a Redi tensor computed from the gridded data, and measuring the error as the departure from the isopycnal over time. Again, linearly interpolating the derivatives performed much better than using the derivatives of the linear interpolation (i.e. nearest-neighbor interpolation of the gridded derivative).
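To illustrate the derivatives involved: under the small-slope approximation, the slope components entering a Redi-like tensor follow from gridded density derivatives (a sketch; stable stratification with z upward is assumed):

```python
import numpy as np

def isopycnal_slopes(drho_dx, drho_dy, drho_dz, eps=1e-12):
    # Small-slope isopycnal slopes S_x = -(drho/dx)/(drho/dz) and
    # S_y = -(drho/dy)/(drho/dz); the clipping guards against a
    # vanishing vertical density gradient
    dz = np.minimum(drho_dz, -eps)
    return -drho_dx / dz, -drho_dy / dz
```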

Perhaps it's good if a user can choose between linear and nearest neighbor interpolation of the gridded derivative. In the latter case, Parcels indeed only needs to compute this from the derivative of the bilinear interpolation using just the 4 grid points surrounding the particle.

Also, ideally it would be possible to take multiple derivatives. In the case of Redi-like diffusion in Lagrangian particle simulations, the gradient in the diffusivity tensor is also necessary.

As for @hetland's last point: I think we should strive to make Parcels aware of the orientation of grid cells, so it can project the horizontal gradient (computed in the grid directions, e.g. 'X' and 'Y') onto the zonal and meridional directions, since those are much more useful for users to build kernels with.

[^1]: Shah, S. H. A. M., Heemink, A. W., & Deleersnijder, E. (2011). Assessing Lagrangian schemes for simulating diffusion on non-flat isopycnal surfaces. Ocean Modelling, 39(3–4), 351–361. https://doi.org/10.1016/j.ocemod.2011.05.008

hetland commented 3 years ago

Thanks for this thorough reply, @daanreijnders.

I think that a more difficult case to consider is interpolation of properties where gradients are strong. E.g., surface floats will congregate at fronts, and the gradients there could be underestimated if the effective stencil used is much larger than one grid cell. But I think your demonstration shows that interpolating calculated derivatives gives smoother results, which might be desirable in some cases.

I wrote my original comment while I was actually planning to start an issue about using xarray (I will get to this soon), and I think this might be something to consider for this problem, too. For example, if I want to interpolate the Ertel PV to parcel trajectories, it might be easier to create a (lazily-evaluated) dataarray in xarray and add this as a field to the FieldSet. Calculating the Ertel PV is complex since it involves many derivative and interpolation operations, and may involve more infrastructure than you want to include in parcels. Leaning on an existing package to do these calculations would be much more efficient.
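A rough sketch of that workflow (the dataset, variable names and dimension mapping are assumptions; DataArray.differentiate is standard xarray, and Field.from_xarray exists in Parcels):

```python
import xarray as xr
from parcels import Field

# Open model output with dask-backed (lazily evaluated) arrays
ds = xr.open_dataset("model_output.nc", chunks={"time": 1})

# A derivative field built with xarray; a full Ertel PV calculation
# would chain many such derivative/interpolation operations (e.g. xgcm)
dTdx = ds["temp"].differentiate("lon")

# Hand the DataArray to Parcels as an extra field
field = Field.from_xarray(dTdx, "dTdx",
                          dimensions={"lon": "lon", "lat": "lat", "time": "time"})
```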

The field in question could also be geo-referenced and rotated to zonal and meridional coordinates, or remain in natural grid coordinates. I think that this approach could also be used to inform a diffusion scheme, like the one you reference.

daanreijnders commented 3 years ago

Great point about lazily evaluating fields. That would enable users to plug diagnostics (like Ertel PV) from e.g. xgcm directly into Parcels (I now see why you mentioned xgcm). If that can be done efficiently, it's really all I need for my purposes.

I'm curious to hear input from other users. I will discuss this with some colleagues in my group in the coming days.

Before you create a new issue, it might be good to check #462, which is still open.