ECP-copa / Cabana

Performance-portable library for particle-based simulations

Incorrect boundaryIndexSpace non-periodic physical boundaries indexes #615

Closed patrickb314 closed 10 months ago

patrickb314 commented 1 year ago

With non-periodic physical boundaries, boundaryIndexSpace for direction 1 in a given dimension has an extent of 3, not 2. This is incorrect for non-periodic boundaries, and iterating over the resulting index space can index outside the valid indices of the associated array view. A simple reproducer is shown below:

#include <Cajita.hpp>
#include <Kokkos_Core.hpp>
#include <mpi.h>

#include <array>
#include <iostream>

using MeshDeviceType = Kokkos::Device<Kokkos::Serial, Kokkos::HostSpace>;

int main( int argc, char* argv[] )
{
    int rank;
    MPI_Init( &argc, &argv );
    Kokkos::initialize( argc, argv );

    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    // Create the global mesh and grid: 4 cells, 5 nodes (non-periodic)
    // in each of 2 directions (i and j), with cell size 1.0.
    std::array<double, 2> low = { -2.0, -2.0 }, high = { 2.0, 2.0 };
    auto global_mesh = Cajita::createUniformGlobalMesh( low, high, 1.0 );

    Cajita::DimBlockPartitioner<2> partitioner; // Create Cajita Partitioner
    auto global_grid = Cajita::createGlobalGrid( MPI_COMM_WORLD, global_mesh,
                                                 {false, false}, partitioner );
    // Build the local grid.
    int halo_width = 2;
    auto local_grid = Cajita::createLocalGrid( global_grid, halo_width );

    // The resulting local grid should have 6 cells and 7 nodes

    // Now create an array layout for that.
    auto node_triple_layout =
        Cajita::createArrayLayout( local_grid, 3, Cajita::Node() );

    // Make an array and view from that layout.
    auto array = Cajita::createArray<double, MeshDeviceType>(
        "test array", node_triple_layout );

    // Check that the view has 9 entries in each direction
    // (5 nodes + halo_width ghost nodes on each side).
    auto view = array->view();
    for ( int d = 0; d < 2; d++ ) {
        int extent = view.extent( d );
        if ( extent != 9 ) {
            std::cout << "Extent of view in direction " << d
                      << " is " << extent << "\n";
        }
    }

    // Now get the boundary index space in each direction.
    for ( int i = -1; i <= 1; i++ ) {
        for ( int j = -1; j <= 1; j++ ) {
            if ( i == 0 && j == 0 )
                continue;

            std::array<int, 2> dir = { i, j };
            auto boundary_space = local_grid->boundaryIndexSpace(
                Cajita::Ghost(), Cajita::Node(), dir );

            for ( int d = 0; d < 2; d++ ) {
                int bwidth = boundary_space.max( d ) - boundary_space.min( d );
                if ( ( dir[d] != 0 ) && ( bwidth != 2 ) ) {
                    std::cout << "Width in direction "
                              << "{" << dir[0] << ", " << dir[1] << "}"
                              << " dim " << d << " is " << bwidth
                              << " instead of 2.\n";
                }
            }
        }
    }

    // Release the array before finalizing Kokkos.
    array = nullptr;

    Kokkos::finalize();
    MPI_Finalize();
    return 0;
}
streeve commented 1 year ago

Thanks for posting, sorry I'm behind on this. @kwitaechong can you take a look at this?

kwitaechong commented 1 year ago

@patrickb314 I think this is the expected result. Please refer to line 534 of Cajita_LocalGrid_impl.hpp: when the entity type is Node, the max value in the +1 direction adds an extra halo width.

streeve commented 1 year ago

@kwitaechong I don't think so - that's what the code currently does, but we definitely need to fix it since it's inconsistent in different directions

patrickb314 commented 1 year ago

I don't know if it's a consistency issue or not, but something is definitely wrong. In the test case I show above, Cajita returns index spaces that will cause code to index beyond the valid bounds of the view associated with a Cajita array. That surely cannot be correct, can it?

streeve commented 1 year ago

No, you're definitely right - we're encouraging indexing out of bounds and that's wrong. I meant consistency in that it's only in the upper bound and only for Node entities

Edit: looks like Face and Edge have the same issue

streeve commented 1 year ago

@patrickb314 I believe the PR fixes the nodes - I'm still looking into the faces & edges

kwitaechong commented 1 year ago

@streeve I agree with your changes. But in this case we need to be careful when we do indexing. For example, with halo_width = 2 in 1-D:

0 1 2 3 4

shared_index_space( Ghost, Node, local, -1 ).min() = 0, shared_index_space( Ghost, Node, local, -1 ).max() = 2 --> excluding the boundary index 2, this leaves 0 1

shared_index_space( Own, Node, local, -1 ).min() = 2, shared_index_space( Own, Node, local, -1 ).max() = 4 --> excluding the boundary index 2, this leaves 3 4

So, when you do indexing over an index space, we used to do for( i = shared_ghost_space.min(); i < shared_ghost_space.max(); ++i ). Here, for the owned index case for Node, I think we would need to do

for( i = shared_owned_space.min() + 1; i <= shared_owned_space.max(); ++i )

patrickb314 commented 1 year ago

That seems less than ideal, too...

streeve commented 1 year ago

Yeah, we definitely don't want to change inclusive vs exclusive bounds

streeve commented 1 year ago

@patrickb314 just to update - we had more discussion on this and we know how we'll fix this in the linked PR