xcompact3d / x3d2

https://xcompact3d.github.io/x3d2
BSD 3-Clause "New" or "Revised" License

Move dimensions to allocator #85

Closed Nanoseb closed 3 months ago

Nanoseb commented 4 months ago

closes #77

semi-h commented 4 months ago

The main change here is introducing new member variables and removing some existing ones, involving the dirps_t, field_t, and allocator_t classes. Some of these changes are great and I completely agree with them, but there are a few I have concerns about.

There is also a new member n_padded in field_t, but it's not used anywhere yet, so I'm not commenting on it.

JamieJQuinn commented 4 months ago

Seems like there are a few different definitions of n used here:

  1. Global n{x,y,z} - entire simulation domain across all nodes
  2. Local n{x,y,z} - local simulation domain unique to one node
  3. Padded local n{x,y,z} - actual size of array used to store local domain
  4. Pressure/velocity n{x,y,z} - either n or n-1 depending on staggeredness of variable

So questions to quell my quonfusion:

  1. What is the value intended to be stored in field_t%n and field_t%n_padded @Nanoseb? The actual size of the field, including staggeredness?
  2. What should tdsops%n contain @semi-h? It looks like it's always local, pressure grid n{x,y,z}. So we can't naively derive this from field_t%n because field_t%n can either be pressure or velocity size? Would having field_t%n help with error handling, e.g. if someone passes in the wrong staggeredness of array?
  3. Unsure why field_t%n wouldn't contain the right n for the halo exchange @semi-h. Can you elaborate?

FWIW it feels like having this information about the field within the field is sensible, but I agree with @semi-h that it mustn't be misleading. I wonder if there's a better naming scheme we can use to distinguish between the above kinds of nx we have floating around.
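
Just to make the distinction concrete, here is a minimal sketch of one possible scheme; none of these component names exist in the code, they only label the four kinds of size listed above:

```fortran
! Hypothetical naming sketch only -- not the current field_t layout.
module dims_sketch
  implicit none

  type :: dims_t
    integer :: n_global(3)  ! 1. entire simulation domain, across all ranks
    integer :: n(3)         ! 2. local subdomain owned by this rank
    integer :: n_padded(3)  ! 3. allocated array extent (>= n, padding/alignment)
    logical :: cell_based   ! 4. .true. for pressure (cell) data, .false. for velocity (vertex)
  end type dims_t

end module dims_sketch
```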

slaizet commented 4 months ago

I think we should discuss n and a proper naming scheme in our next meeting on Monday. We can get some inspiration from 2DECOMP&FFT.

Nanoseb commented 4 months ago

I think there are two different questions here: where these variables are stored, and what they are called. The main purpose of this PR was to put everything in the fields. Most of the time when we act on fields we want to loop through them, so having all the sizes we would need at hand was the main drive for this work. That's the reason why, even if it leads to some duplication of quantities, I still think it makes sense to have them accessible via the field object.

Concerning the naming conventions, I agree this needs to be clarified/improved. One of the reasons it isn't really clear is that get_block doesn't distinguish between velocity (n) and pressure (n+1) fields. Something we could do is, just as we do with the direction, ask for either a cell-based block or a vertex-based one when requesting get_block. Requesting either would set the right value of n. That way, we can always assume that n is the size we need, whether in a pressure or a velocity array.
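
For illustration, a minimal self-contained sketch of the idea, assuming a cell/vertex flag passed to get_block (the type contents and helper names here are made up, not the real allocator_t/field_t API):

```fortran
! Sketch only: a get_block that takes a data-location flag and sets n on the
! returned field, so downstream code can always trust field%n.
module get_block_sketch
  implicit none

  integer, parameter :: VERT = 1, CELL = 2

  type :: field_t
    real, allocatable :: data(:)
    integer :: n              ! usable size along the primary direction
  end type field_t

  type :: allocator_t
    integer :: n_vert         ! velocity (vertex) count on this rank
    integer :: n_cell         ! pressure (cell) count on this rank
  contains
    procedure :: get_block
  end type allocator_t

contains

  function get_block(self, data_loc) result(f)
    class(allocator_t), intent(in) :: self
    integer, intent(in) :: data_loc
    type(field_t) :: f

    ! The caller states whether it wants cell- or vertex-based data; n is set
    ! here once instead of being tracked by every caller.
    f%n = merge(self%n_cell, self%n_vert, data_loc == CELL)
    allocate (f%data(f%n))
  end function get_block

end module get_block_sketch
```

A caller would then do something like `u = alloc%get_block(CELL)` and loop up to `u%n` without worrying about staggeredness.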

As we mentioned in the related issue, the plan was to remove the _loc from n and add _global whenever we talk about global sizes. So any n here is local.

For SZ, I will have a think about how we could do it, but indeed, if it means we can't vectorise any loops any more, that's not ideal.

semi-h commented 4 months ago

> So questions to quell my quonfusion:
>
> ...
>
> 2. What should tdsops%n contain @semi-h? It looks like it's always local, pressure grid n{x,y,z}. So we can't naively derive this from field_t%n because field_t%n can either be pressure or velocity size? Would having field_t%n help with error handling, e.g. if someone passes in the wrong staggeredness of array?

tdsops%n is not actually always the local pressure grid size. It depends on the type of operation the tdsops_t instance is set up to carry out. For example, we have a first derivative operator, der1st, and with this one we input velocity grid data and output velocity grid data, so in this case der1st%n is equal to the velocity grid size. But sometimes we input velocity grid data and output on the pressure grid; then tdsops%n is the pressure grid size. Also, the pressure grid is not always `n_velocity - 1`; it actually depends on the boundary condition and the location of the subdomain in the global domain. Only the subdomains at the end (w.r.t. x, y, and z) might have a pressure grid size of `n_velocity - 1`; the rest are identical to the velocity grid.
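
As a toy illustration of that dependence (the names and the setup below are made up, not the actual tdsops_t interface):

```fortran
! Toy illustration only: the n carried by an operator depends on whether it
! outputs on the velocity or the pressure grid, and the pressure grid size
! depends on whether this rank holds the last subdomain in the direction.
program tdsops_n_sketch
  implicit none
  integer :: n_vert, n_cell, der1st_n, stagder_n
  logical :: last_subdomain

  n_vert = 256                ! velocity (vertex) points on this rank
  last_subdomain = .true.     ! only the last rank in x/y/z loses a point
  n_cell = merge(n_vert - 1, n_vert, last_subdomain)

  der1st_n = n_vert           ! velocity in, velocity out
  stagder_n = n_cell          ! velocity in, pressure out

  print *, 'der1st%n  =', der1st_n    ! 256
  print *, 'stagder%n =', stagder_n   ! 255 here, 256 on interior ranks
end program tdsops_n_sketch
```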

> 3. Unsure why field_t%n wouldn't contain the right n for the halo exchange @semi-h. Can you elaborate?

If field_t%n is set to the velocity grid size, sending the last 4 layers of a pressure field to the next rank can be a problem. Let's assume we have a field with 256 velocity points along the primary direction, and 255 pressure points. Then we might want to send pressure points 252 253 254 255 to the next rank. But if we have field_t%n = 256 and take a slice like field%data(i, u%n - n_halo + j, k) (j from 1 to 4), we'll access data at 253 254 255 256, and this is not the last 4 layers of the pressure field.
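
A small self-contained sketch of that off-by-one, with the same numbers as above (not code from the PR):

```fortran
! Sketch of the halo-exchange pitfall: with the velocity size (256) stored on
! the field, slicing the "last n_halo layers" of a pressure field (255 points)
! picks indices 253..256 instead of the intended 252..255.
program halo_slice_sketch
  implicit none
  integer, parameter :: n_halo = 4
  integer :: n_vel, n_press, j

  n_vel = 256     ! velocity points along the primary direction
  n_press = 255   ! pressure points on this (last) subdomain

  print *, 'pressure layers we want to send:   ', (n_press - n_halo + j, j = 1, n_halo)
  print *, 'layers picked using velocity size: ', (n_vel - n_halo + j, j = 1, n_halo)
end program halo_slice_sketch
```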

> I think there are two different questions here: where these variables are stored, and what they are called. The main purpose of this PR was to put everything in the fields. Most of the time when we act on fields we want to loop through them, so having all the sizes we would need at hand was the main drive for this work. That's the reason why, even if it leads to some duplication of quantities, I still think it makes sense to have them accessible via the field object.

> Concerning the naming conventions, I agree this needs to be clarified/improved. One of the reasons it isn't really clear is that get_block doesn't distinguish between velocity (n) and pressure (n+1) fields. Something we could do is, just as we do with the direction, ask for either a cell-based block or a vertex-based one when requesting get_block. Requesting either would set the right value of n. That way, we can always assume that n is the size we need, whether in a pressure or a velocity array.

Passing an extra argument when calling get_block to set field_t%n correctly is an option, but in my opinion this will only increase the complexity. As of now the issue with different grid sizes is nicely hidden below in the backends; it works just fine and people working at solver level don't have to worry about it at all. Also, this can be a problem when we request a field at a level other than the solver. For example we request intermediate storages in transeq, the time integrator, and possibly more. If we plan to rely on field_t%n we would have to make sure that it is always set correctly every single time; this is not only error prone, but the correct value of n (or access to this information) may not be available at all these levels.

Also, if we require the user to specify whether a field is pressure or velocity, this is an extra responsibility we give to the user. For example, to get a derivative on the staggered grid, the user will have to take two distinct but correlated actions to get a single desired output: first specifying that the field is pressure or velocity, then calling tds_solve with the right tdsops_t instance. However we don't require this as of now and it works fine.

I think we need to focus on where exactly in the codebase we use the proposed members %SZ, %n, and %n_groups of field_t, and maybe consider alternative solutions. If their use is limited, then big changes in the codebase (particularly around tds_solve) may not be worth it. In the PR there is a loop in reorder_omp that makes use of these members and they are very helpful there, and some of these members are also used in copy_into_buffers, with minor changes. Of course this is only a draft and not complete, so in what other subroutines do you think these members will be useful?
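
To be clear about the pattern I mean, here is a self-contained sketch of such a loop, assuming the (SZ, n, n_groups) layout; the scale_field routine is made up, not the actual reorder_omp:

```fortran
! Sketch only: a loop whose bounds come entirely from members carried by the
! field itself, so no extra size arguments need to be threaded through.
module field_loop_sketch
  implicit none

  type :: field_t
    real, allocatable :: data(:, :, :)   ! (SZ, n, n_groups) layout
    integer :: SZ, n, n_groups
  end type field_t

contains

  subroutine scale_field(f, alpha)
    type(field_t), intent(inout) :: f
    real, intent(in) :: alpha
    integer :: i, j, k

    ! All bounds are read off the field; the inner SZ loop stays contiguous
    ! and vectorisable.
    !$omp parallel do private(i, j)
    do k = 1, f%n_groups
      do j = 1, f%n
        do i = 1, f%SZ
          f%data(i, j, k) = alpha*f%data(i, j, k)
        end do
      end do
    end do
    !$omp end parallel do
  end subroutine scale_field

end module field_loop_sketch
```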

Nanoseb commented 3 months ago

Going via another route (#91), so closing this PR.