openPMD / openPMD-api

:floppy_disk: C++ & Python API for Scientific I/O
https://openpmd-api.readthedocs.io
GNU Lesser General Public License v3.0

file based bp5 writer hang #1655

Open guj opened 3 months ago

guj commented 3 months ago

Describe the bug

The recent optimization breaks an MPI use case in file-based mode. A minimal code example is included below; run it with 2 ranks to see the effect. In short, at the second flush, rank 1 has nothing to contribute, so it does not call into BP5 while rank 0 does. BP5 writes are collective, so rank 0 hangs waiting on the inactive rank 1. If we use variable-based encoding instead, a flush to ADIOS appears to be forced (by openPMD-api?) on all ranks, and then it works.

To Reproduce

C++ example:

#include <openPMD/openPMD.hpp>

#include <mpi.h>

#include <iostream>

#include <vector>

using std::cout;
using namespace openPMD;

int main(int argc, char *argv[])
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    int mpi_size;
    int mpi_rank;

    MPI_Comm_size(MPI_COMM_WORLD, &mpi_size);
    MPI_Comm_rank(MPI_COMM_WORLD, &mpi_rank);

    auto const value = float(mpi_size * 100 + mpi_rank);
    std::vector<float> local_data(10 * 300, value);

    std::string filename = "ptl_%T.bp";
    // std::string filename = "ptl.bp"; // this is variable based and it works

    Series series = Series(filename, Access::CREATE, MPI_COMM_WORLD);

    Datatype datatype = determineDatatype<float>();

    auto myptl = series.writeIterations()[1].particles["ion"];
    Extent global_ptl = {10ul * mpi_size * 300};
    Dataset dataset_ptl = Dataset(datatype, global_ptl, "{}");
    myptl["charge"].resetDataset(dataset_ptl);

    series.flush();

    if (mpi_rank == 0) // only rank 0 adds data
        myptl["charge"].storeChunk(local_data, {0}, {3000});

    series.flush(); // hangs here

    MPI_Finalize();

    return 0;
}

Software Environment

Additional context

pgrete commented 3 months ago

I think I came across something similar last week (though I actually got an error instead of a hang). The issue was that I was calling storeChunk with a data vector whose .data() was a nullptr (but I also passed an extent of 0 to storeChunk).
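
In terms of the reproducer above, that pattern would look roughly like this (a sketch reusing myptl, mpi_rank, and series from that example, not my actual code):

    // A rank that owns no particles still calls storeChunk, but with an
    // empty vector (.data() == nullptr) and a zero extent.
    std::vector<float> empty_data;
    if (mpi_rank != 0)
        myptl["charge"].storeChunk(empty_data, {0}, {0});

    series.flush();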

franzpoeschel commented 3 months ago

Hello @guj and @pgrete, this behavior is known and can only be fully fixed once we transition to flushing those Iterations that are open rather than those that are modified. It was not the recent optimization that broke this; rather, BP5 is simply much stricter about collective operations, so this behavior is more likely to occur now. Until this is fully solved, please use the workaround implemented in https://github.com/openPMD/openPMD-api/pull/1619:

    series.writeIterations()[1].seriesFlush();

This is guaranteed to flush Iteration 1 on all ranks, regardless of whether it is modified or not. Also, your example is missing a call to series.close() before MPI_Finalize().
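
Applied to the reproducer above, the end of main would then look roughly like this (a sketch combining both suggestions; the rest of the program stays unchanged):

    if (mpi_rank == 0) // only rank 0 adds data
        myptl["charge"].storeChunk(local_data, {0}, {3000});

    // flushes Iteration 1 collectively on all ranks, even those without new data
    series.writeIterations()[1].seriesFlush();

    // close the Series before shutting down MPI
    series.close();
    MPI_Finalize();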

guj commented 3 months ago

Thanks Franz, the workaround works.