openPMD / openPMD-api

:floppy_disk: C++ & Python API for Scientific I/O
https://openpmd-api.readthedocs.io
GNU Lesser General Public License v3.0

HDF5 Option for OPENPMD_HDF5_INDEPENDENT #1513

Closed ax3l closed 2 months ago

ax3l commented 1 year ago

Although our API contract allows independent store/load calls (as in ADIOS), everything that is MPI-I/O based will generally perform better when collective MPI-I/O calls are used.

Thus, the environment control OPENPMD_HDF5_INDEPENDENT should also be exposed as an option of the HDF5 backend, so that users who guarantee collective storeChunk calls can enable collective I/O programmatically for performance.

Note that with MPI-I/O (and thus PHDF5) this means that ranks with zero contributions need to issue zero-sized storeChunk calls. The rationale is that even though an MPI rank might not contribute data itself, it might still end up collecting data from other ranks (as an aggregator) for the collective data transport to disk.
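
A minimal sketch of that pattern, assuming a 1D decomposition in which only even ranks hold data, and assuming that zero-extent storeChunk calls are accepted by the backend (which is exactly what this requirement implies); file name, decomposition and extents are illustrative only:

    #include <openPMD/openPMD.hpp>

    #include <mpi.h>

    #include <cstdint>
    #include <vector>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        {
            openPMD::Series series(
                "data_%T.h5", openPMD::Access::CREATE, MPI_COMM_WORLD);

            auto E_x = series.iterations[0].meshes["E"]["x"];
            std::uint64_t const globalExtent = 100;
            E_x.resetDataset({openPMD::Datatype::DOUBLE, {globalExtent}});

            // illustrative decomposition: only even ranks hold data;
            // odd ranks contribute a zero-sized chunk but still participate
            std::uint64_t localExtent = 0, offset = 0;
            if (rank % 2 == 0)
            {
                std::uint64_t const contributors = (size + 1) / 2;
                localExtent = globalExtent / contributors;
                offset = (rank / 2) * localExtent;
            }
            std::vector<double> localData(localExtent, double(rank));

            // every rank calls storeChunk, possibly with extent {0}
            E_x.storeChunk(localData, {offset}, {localExtent});

            // with OPENPMD_HDF5_INDEPENDENT=OFF (or a future programmatic
            // option, see below), this flush can use collective MPI-I/O
            series.flush();
        }

        MPI_Finalize();
        return 0;
    }

The inner scope ensures the Series is closed before MPI_Finalize() is called.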

ax3l commented 1 year ago

Refs.:

ax3l commented 1 year ago

As a side note, non-blocking MPI-I/O operations are now coming to MPI (courtesy of Quincey Koziol's work on the standard). HDF5 is planning to use them to introduce async APIs as well.

In the future, that would allow a workflow similar to what we have in ADIOS1/2: independent storage calls for chunks and maybe even attributes, with a collective call needed to kick off the writes/reads.
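
For context only: HDF5 has since introduced an event-set based async API (HDF5 1.13+). A rough sketch of its shape, assuming an already opened dataset dset and a matching buffer buf, and noting that truly asynchronous execution additionally requires the async VOL connector to be loaded:

    #include <hdf5.h>

    // Queue a write into an event set and wait for completion later.
    // dset: an already opened dataset; buf: data matching its dataspace.
    void async_write_sketch(hid_t dset, double const *buf)
    {
        hid_t es = H5EScreate(); // event set collecting async operations

        // enqueue the write; it may proceed in the background
        H5Dwrite_async(
            dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT, buf, es);

        // ... overlap computation here ...

        // block until all queued operations have completed
        size_t num_in_progress = 0;
        hbool_t err_occurred = 0;
        H5ESwait(es, H5ES_WAIT_FOREVER, &num_in_progress, &err_occurred);
        H5ESclose(es);
    }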

ax3l commented 1 year ago

Other details:

Things like accidental datatype conversions, dataspace conversions (both between declaration and write), and too-small I/O requests (< filesystem block size) might break collective operations back into independent I/O.
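
When in doubt, parallel HDF5 can report why a requested collective transfer was demoted. A sketch (these introspection calls exist in MPI-parallel HDF5 builds), assuming dxpl is the transfer property list that was passed to the preceding H5Dwrite with H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE):

    #include <hdf5.h>

    #include <cstdio>

    // Query, after an H5Dwrite, whether collective I/O actually happened
    // and, if not, why it was broken up (e.g. H5D_MPIO_DATATYPE_CONVERSION).
    void report_collective_fallback(hid_t dxpl)
    {
        uint32_t local_cause = 0, global_cause = 0;
        H5Pget_mpio_no_collective_cause(dxpl, &local_cause, &global_cause);
        if (global_cause != H5D_MPIO_COLLECTIVE)
        {
            std::fprintf(
                stderr,
                "collective I/O was broken up: local=0x%x global=0x%x\n",
                (unsigned)local_cause,
                (unsigned)global_cause);
        }

        H5D_mpio_actual_io_mode_t mode;
        H5Pget_mpio_actual_io_mode(dxpl, &mode);
        // mode reports whether the transfer ended up collective or independent
    }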

jeanbez commented 1 year ago

> As a side note, non-blocking MPI-I/O operations are now coming to MPI (courtesy of Quincey Koziol's work on the standard). HDF5 is planning to use them to introduce async APIs as well.
>
> In the future, that would allow a workflow similar to what we have in ADIOS1/2: independent storage calls for chunks and maybe even attributes, with a collective call needed to kick off the writes/reads.

Side note: we are now testing openPMD with the CACHE+ASYNC VOL connectors. Using the ASYNC VOL directly had some issues related to attributes that prevented the operations from being fully asynchronous and from benefiting from it, but stacking it with the CACHE VOL seems to do the trick.

franzpoeschel commented 1 year ago

Note the API of Series::flush():

    /** Execute all required remaining IO operations to write or read data.
     *
     * @param backendConfig Further backend-specific instructions on how to
     *                      implement this flush call.
     *                      Must be provided in-line, configuration is not read
     *                      from files.
     */
    void flush(std::string backendConfig = "{}");

I imagine that series.flush("hdf5.independent_stores = false") would be a relatively straightforward API to expose this feature.
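
A hypothetical usage sketch of that suggestion; the key hdf5.independent_stores is the proposal from this comment, not an existing option, and the JSON spelling assumes the usual in-line JSON/TOML handling of backendConfig:

    // proposed per-flush override, JSON spelling (hypothetical key)
    series.flush(R"({"hdf5": {"independent_stores": false}})");

    // TOML spelling, as written above (hypothetical key)
    series.flush("hdf5.independent_stores = false");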