omlins / ParallelStencil.jl

Package for writing high-level code for parallel high-performance stencil computations that can be deployed on both GPUs and CPUs

Heterogeneous stencil formulations #78

Closed smartalecH closed 1 year ago

smartalecH commented 1 year ago

TL;DR: can PS run multiple distinct kernels on different SMs within a hide_communication block?

One of the biggest advantages of finite-difference-based codes is the ability to add a wide variety of "physics" to the domain by spatially modifying the underlying stencil. For example, different materials may introduce additional PDE terms (i.e. additional storage requirements) that are only needed in certain regions of the grid.

Computationally, this means we can define various kernels across the grid that handle the local effects.

Of course, we could just opt to use the most general kernel across the whole grid, but this significantly reduces the potential performance of the code.

Naively, we might just write a series of case statements within our global looping kernel to handle the heterogeneous effects. However, extensive branching will severely hinder the performance of the code. Furthermore, it's difficult to handle local memory effects. In other words, we don't want to define large (and computationally dense) arrays whose data is actually quite sparse.

Instead, it's common practice to "chunk" the domain, such that each chunk uses the same stencil (i.e. the same underlying kernel). This way, arrays can be assigned to each chunk, rather than on the global domain.
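
For concreteness, here is a minimal sketch of this chunking idea with ParallelStencil (all names, sizes, and the split point iy_split are made up for illustration): each chunk gets its own kernel, launched only over its own index ranges, and the extra storage S exists only for the chunk that needs it.

```julia
using ParallelStencil
@init_parallel_stencil(Threads, Float64, 2)   # swap Threads for CUDA on a GPU

# Chunk 1: plain diffusion stencil.
@parallel_indices (ix, iy) function step_simple!(T2, T, dt, _dx2, _dy2)
    T2[ix, iy] = T[ix, iy] + dt * ((T[ix-1, iy] - 2T[ix, iy] + T[ix+1, iy]) * _dx2 +
                                   (T[ix, iy-1] - 2T[ix, iy] + T[ix, iy+1]) * _dy2)
    return
end

# Chunk 2: same stencil plus a source term S that is stored only for this chunk.
@parallel_indices (ix, iy) function step_source!(T2, T, S, dt, _dx2, _dy2, iy0)
    T2[ix, iy] = T[ix, iy] + dt * ((T[ix-1, iy] - 2T[ix, iy] + T[ix+1, iy]) * _dx2 +
                                   (T[ix, iy-1] - 2T[ix, iy] + T[ix, iy+1]) * _dy2) +
                 dt * S[ix, iy - iy0 + 1]
    return
end

nx, ny   = 128, 128
iy_split = ny ÷ 2                     # hypothetical boundary between the two chunks
T  = @rand(nx, ny); T2 = copy(T)
S  = @zeros(nx, ny - iy_split)        # extra storage allocated per chunk, not globally
dt = 0.1; _dx2 = _dy2 = 1.0

# Each kernel is launched only over its own chunk's index ranges:
@parallel (2:nx-1, 2:iy_split)      step_simple!(T2, T, dt, _dx2, _dy2)
@parallel (2:nx-1, iy_split+1:ny-1) step_source!(T2, T, S, dt, _dx2, _dy2, iy_split+1)
```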

In addition, GPUs can process different kernels simultaneously (e.g. using multiple SMs). Currently, PS uses this approach to implement the hide_communication feature. In some cases, it even updates the boundaries of the domain within the same timeframe (before a sync is performed).

In the ETHZ course, there is a small section describing a current limitation with this approach:

[screenshot of the course section describing this limitation]

Is it possible to apply other, more complicated kernels here?

omlins commented 1 year ago

Is it possible to apply other, more complicated kernels here?

No, only kernels that act on the boundaries and do not depend on the inner point computations of the primary compute statement (update_T here). These are requirements to guarantee correct results. If we allowed for a second full computation kernel, then we would get wrong results if, for example, its boundary point computations depended on the inner computations of the first kernel. The thing is that the order of execution within hide_communication is not what you would expect if you had multiple full kernels.
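
For reference, the supported pattern looks roughly like the following. This is a schematic sketch along the lines of the course examples, not the exact course code; the kernel names, the boundary-condition kernel, the b_width tuple, and all sizes are placeholders:

```julia
using ImplicitGlobalGrid, ParallelStencil
using ParallelStencil.FiniteDifferences3D
@init_parallel_stencil(Threads, Float64, 3)   # swap Threads for CUDA on a GPU

# Primary compute statement: a full diffusion step on the inner points.
@parallel function update_T!(T2, T, dt, _dx2, _dy2, _dz2)
    @inn(T2) = @inn(T) + dt * (@d2_xi(T) * _dx2 + @d2_yi(T) * _dy2 + @d2_zi(T) * _dz2)
    return
end

# A kernel that only touches boundary points (zero-flux condition in x).
@parallel_indices (iy, iz) function bc_x!(T)
    T[1,   iy, iz] = T[2,     iy, iz]
    T[end, iy, iz] = T[end-1, iy, iz]
    return
end

nx = ny = nz = 64
me, = init_global_grid(nx, ny, nz)            # distributed grid via ImplicitGlobalGrid
T   = @rand(nx, ny, nz); T2 = copy(T)
dt  = 0.1; _dx2 = _dy2 = _dz2 = 1.0

@hide_communication (16, 2, 2) begin
    @parallel update_T!(T2, T, dt, _dx2, _dy2, _dz2)
    # Only statements that act on boundaries may follow the primary compute call:
    @parallel (1:size(T2, 2), 1:size(T2, 3)) bc_x!(T2)
    update_halo!(T2)
end
finalize_global_grid()
```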

That said, the rules could be weakened a bit in certain cases. However, it is key that the rules are very easily understandable, as otherwise users might get wrong results without even noticing...

Now in your case, you should be able to do everything you describe (using the @parallel_async construct), but outside of a hide_communication block. This might be just fine if you can still apply hide_communication to one of your heavier kernels.
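
A minimal sketch of what that could look like, reusing the hypothetical per-chunk kernels and names from the first comment above:

```julia
# Launch the per-chunk kernels without the implicit synchronization of @parallel,
# then synchronize once at the end. Whether the kernels truly overlap on the
# device depends on the backend and its stream handling, but the launches
# themselves do not block each other.
@parallel_async (2:nx-1, 2:iy_split)      step_simple!(T2, T, dt, _dx2, _dy2)
@parallel_async (2:nx-1, iy_split+1:ny-1) step_source!(T2, T, S, dt, _dx2, _dy2, iy_split+1)
@synchronize()
```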

smartalecH commented 1 year ago

Thanks for the quick reply, @omlins

These are requirements to guarantee correct results. If we allowed for a second full computation kernel, then we would get wrong results if, for example, its boundary point computations depended on the inner computations of the first kernel.

What if we were to (1) assume all kernel domains are disjoint (such that only boundaries of kernels talk) and (2) explicitly apply a halo around each kernel domain, even if that kernel was on the same device? This way we don't have to worry about "clobbering" each other's boundary points.

If we did this, we could place all "kernels" within the same halo update block, right?
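
For illustration, a toy version of what I have in mind (all sizes and names made up), with two chunks on the same device:

```julia
nx, cny = 128, 64            # cny = interior width of each chunk (hypothetical)
A = @zeros(nx, cny + 2)      # chunk A, padded with one halo column on each side
B = @zeros(nx, cny + 2)      # chunk B, padded likewise

# ... each chunk's kernel updates only A[:, 2:end-1] resp. B[:, 2:end-1] ...

# Explicit halo exchange between the chunks, even though both live on the
# same device; only halo columns are copied, interior points are never touched:
A[:, end] .= B[:, 2]         # A's right halo <- B's first interior column
B[:, 1]   .= A[:, end-1]     # B's left halo  <- A's last interior column
```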

The @parallel_async option looks promising. I'm just checking whether we can leverage the best of both worlds (as asynchronous halo updates seem to be what helps ensure consistent weak scaling).

omlins commented 1 year ago

What if we were to (1) assume all kernel domains are disjoint (such that only boundaries of kernels talk) and (2) explicitly apply a halo around each kernel domain, even if that kernel was on the same device? This way we don't have to worry about "clobbering" each other's boundary points. If we did this, we could place all "kernels" within the same halo update block, right?

Sorry, I'm not sure I get what you mean.

The @parallel_async option looks promising. I'm just checking whether we can leverage the best of both worlds (as asynchronous halo updates seem to be what helps ensure consistent weak scaling).

If you split your domain yourself into multiple kernels, then you might also be able to split off the boundary regions yourself explicitly; if so, you can overlap the communication with computation yourself, using @parallel_async to launch the kernels.
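
Schematically, and assuming the hypothetical per-chunk kernels from the earlier sketch plus an ImplicitGlobalGrid setup, this manual overlap could look like the following (the strip width bw and all ranges are placeholders):

```julia
bw = 8   # width of the boundary strips that feed the halo exchange (hypothetical)

# 1) Compute the boundary strips synchronously so their data is ready to send:
@parallel (2:nx-1, 2:1+bw)     step_simple!(T2, T, dt, _dx2, _dy2)
@parallel (2:nx-1, ny-bw:ny-1) step_source!(T2, T, S, dt, _dx2, _dy2, iy_split+1)

# 2) Launch the inner-point computations asynchronously:
@parallel_async (2:nx-1, 2+bw:iy_split)      step_simple!(T2, T, dt, _dx2, _dy2)
@parallel_async (2:nx-1, iy_split+1:ny-1-bw) step_source!(T2, T, S, dt, _dx2, _dy2, iy_split+1)

# 3) Exchange halos (here with ImplicitGlobalGrid's update_halo!) while the
#    inner points are still being computed, then synchronize everything.
#    Whether the copies and kernels truly overlap depends on the backend:
update_halo!(T2)
@synchronize()
```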

smartalecH commented 1 year ago

If you split your domain yourself into multiple kernels, then you might also be able to split off the boundary regions yourself explicitly; if so, you can overlap the communication with computation yourself, using @parallel_async to launch the kernels.

Got it, thanks 🙂