Refs.:
Other details:
Check H5Pget_mpio_actual_io_mode and, if the result is not as expected, H5Pget_mpio_no_collective_cause.
Things like accidental datatype or dataspace conversions (i.e., a mismatch between declaration and write) and too-small I/O requests (< FS block size) can break collectives down into independent I/O.
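For reference, a minimal sketch (in C++ against the HDF5 C API) of how these diagnostics could be queried after a write; the helper name is a placeholder and the dxpl handle is assumed to be the transfer property list that was passed to H5Dwrite:

```cpp
#include <hdf5.h>
#include <cstdint>
#include <cstdio>

// Query the data transfer property list after H5Dwrite to see whether
// collective MPI-I/O actually happened, and if not, why it was broken up.
void checkCollectiveIO(hid_t dxpl)
{
    H5D_mpio_actual_io_mode_t mode;
    H5Pget_mpio_actual_io_mode(dxpl, &mode);

    if (mode == H5D_MPIO_NO_COLLECTIVE)
    {
        // bitmasks of causes, e.g. H5D_MPIO_DATATYPE_CONVERSION or
        // H5D_MPIO_NOT_CONTIGUOUS_OR_CHUNKED_DATASET
        std::uint32_t localCause = 0, globalCause = 0;
        H5Pget_mpio_no_collective_cause(dxpl, &localCause, &globalCause);
        std::printf(
            "collective I/O was not performed: local=0x%x global=0x%x\n",
            localCause, globalCause);
    }
}
```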
As a side note, non-blocking MPI-I/O operations are now coming to MPI (courtesy of Quincy Koziol's work on the standard). HDF5 is planning to use them to introduce async APIs as well.
In the future, that would allow a workflow similar to what we have in ADIOS1/2: independent storage calls for chunks and maybe even attributes, with a collective call needed to kick off the writes/reads.
Side note: we are now testing openPMD with the CACHE+ASYNC VOL connectors. Using the ASYNC VOL directly had some issues related to attributes that prevented the operations from being fully asynchronous and benefiting from it, but stacking it with the CACHE VOL seems to do the trick.
Note the API of Series::flush():
/** Execute all required remaining IO operations to write or read data.
*
* @param backendConfig Further backend-specific instructions on how to
* implement this flush call.
* Must be provided in-line, configuration is not read
* from files.
*/
void flush(std::string backendConfig = "{}");
I imagine that series.flush("hdf5.independent_stores = false")
would be a relatively straightforward API to expose this feature.
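A sketch of what such a per-flush hint could look like from user code, assuming the option key suggested above (here spelled as inline JSON), a Series opened with the MPI-enabled constructor, and placeholder record names; illustrative only, not a final API:

```cpp
#include <openPMD/openPMD.hpp>
#include <cstdint>
#include <vector>

// Hypothetical usage of the proposed per-flush option; the key
// "hdf5.independent_stores" follows the suggestion above.
void writeStep(openPMD::Series &series, int mpiRank, int mpiSize)
{
    constexpr std::uint64_t localSize = 100;
    auto E_x = series.iterations[0].meshes["E"]["x"];
    E_x.resetDataset(
        {openPMD::Datatype::DOUBLE,
         {static_cast<std::uint64_t>(mpiSize) * localSize}});

    std::vector<double> chunk(localSize, 1.0 * mpiRank);
    E_x.storeChunk(
        chunk, {static_cast<std::uint64_t>(mpiRank) * localSize}, {localSize});

    // every rank calls flush; HDF5 is asked to keep the buffered stores collective
    series.flush(R"({"hdf5": {"independent_stores": false}})");
}
```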
Although our API contract allows independent store/load calls (as in ADIOS), everything that is MPI-I/O based will essentially perform better if collective MPI-I/O calls are used.
Thus, the env control OPENPMD_HDF5_INDEPENDENT should be translated into an option for HDF5, so users who guarantee collective storeChunk calls can activate it programmatically for performance. Note that in MPI-I/O (and thus PHDF5) this means ranks with zero contributions need to issue zero-sized storeChunk calls. The rationale is that although an MPI rank might not contribute data of its own, it might still end up acting as a collection (aggregator) rank in MPI for collective data transport to disk.
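At the plain MPI-I/O level the reasoning can be illustrated with a minimal sketch (placeholder function name, not HDF5 itself): the collective write must be entered by every rank, even those passing a zero count, because any rank may be chosen as an aggregator for other ranks' data:

```cpp
#include <mpi.h>

// Every rank enters the collective write; ranks without data of their own
// pass count == 0 but may still serve as aggregators for collective buffering.
void collectiveWrite(
    MPI_File fh, MPI_Offset myOffset, double const *data, int myCount)
{
    MPI_File_write_at_all(
        fh, myOffset, data, myCount, MPI_DOUBLE, MPI_STATUS_IGNORE);
}
```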