Closed: tim-griesbach closed this pull request 6 months ago
This seems ready for testing. Before it gets merged, we may want to check whether we can use `sc_io_write_all` throughout and no longer need to rely on the `#ifdef P4EST_ENABLE_MPIIO`.
Let's just merge this. This code has been tested well by @tim-griesbach.
> This seems ready for testing. Before it gets merged, we may want to check whether we can use `sc_io_write_all` throughout and no longer need to rely on the `#ifdef P4EST_ENABLE_MPIIO`.
Currently, the function `sc_io_write_all` is not equivalent to `MPI_File_write_all`. The reason for this was to keep the implementation of the case without MPI I/O but with MPI simple. Therefore, the function is equivalent to calling `sc_io_write_at_all` with the offset `0`. The functions `sc_io_write_all` and `sc_io_read_all` are currently both unused in our implementation of the first version of the file format.
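The relation described above can be sketched in plain C. This is a hypothetical mock, not the real libsc API: `mock_write_at_all` and `mock_write_all` are made-up names, and stdio stands in for the MPI I/O layer so the sketch runs without an MPI installation. The point is only the semantics: the "write all" variant is the "write at" variant with the offset fixed to `0`.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* stand-in for sc_io_write_at_all: write zcount bytes at an explicit offset
 * (the real function takes an MPI file handle and an MPI datatype) */
static int
mock_write_at_all (FILE * fp, long offset, const void *ptr, size_t zcount)
{
  if (fseek (fp, offset, SEEK_SET) != 0) {
    return -1;
  }
  return fwrite (ptr, 1, zcount, fp) == zcount ? 0 : -1;
}

/* stand-in for sc_io_write_all: the same call with the offset fixed to 0 */
static int
mock_write_all (FILE * fp, const void *ptr, size_t zcount)
{
  return mock_write_at_all (fp, 0, ptr, zcount);
}
```

In the real libsc the signatures differ (MPI file handles, datatypes, error reporting), but the offset-0 reduction is exactly why `sc_io_write_all` is not a general replacement for `MPI_File_write_all`.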
In the implementation of the parallel writing of VTU files, ForestClaw used `MPI_File_write` for distributed data, e.g. coordinates. MPI also offers the collective function `MPI_File_write_all`, which is optimized for writing parallel distributed data. This PR uses `MPI_File_write_all` in the places where we write parallel distributed data.

Moreover, we introduce a buffering mechanism for patches during the writing of VTU files. This is a side effect of ensuring that the collective function `MPI_File_write_all`
is called equally many times on each rank. The buffering mechanism is controlled by the variable `patch_threshold` in `fclaw2d_vtk_state_t`. By default, `patch_threshold` is set to `-1`, i.e. we buffer all patch data for each `fclaw2d_vtk_write_field` call and then write the data to disk. Alternatively, one can set `patch_threshold` in the VTK state to a number strictly greater than `0`; then only `patch_threshold` many patches are buffered, and whenever the threshold is reached the buffered data is written to disk.

Before, there was one write operation per patch. This is no longer true in general, since the number of write operations now depends on `patch_threshold`.

We appreciate any feedback, in particular concerning the impact on parallel writing performance.
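The buffering mechanism can be sketched as follows. This is a simplified, hypothetical model, not the PR's actual code: `vtk_buffer_t`, `PATCH_BYTES`, and the function names are made up, and `fwrite` stands in for the collective `MPI_File_write_all`. In the real code the flush is collective, so every rank must reach the same number of flush calls regardless of how many patches it owns.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define PATCH_BYTES 8           /* bytes of payload per patch (made up) */

typedef struct vtk_buffer
{
  int patch_threshold;          /* -1: buffer everything, > 0: flush limit */
  int num_buffered;             /* patches currently held in the buffer */
  int num_flushes;              /* how often we wrote to disk */
  char data[1 << 16];           /* buffered patch payload */
}
vtk_buffer_t;

/* write out the buffered bytes; the real code would call the
 * collective MPI_File_write_all here, once on every rank */
static void
buffer_flush (vtk_buffer_t * b, FILE * fp)
{
  fwrite (b->data, 1, (size_t) b->num_buffered * PATCH_BYTES, fp);
  b->num_buffered = 0;
  ++b->num_flushes;
}

/* append one patch payload; flush as soon as the threshold is reached */
static void
buffer_add_patch (vtk_buffer_t * b, FILE * fp,
                  const char payload[PATCH_BYTES])
{
  memcpy (b->data + (size_t) b->num_buffered * PATCH_BYTES,
          payload, PATCH_BYTES);
  ++b->num_buffered;
  if (b->patch_threshold > 0 && b->num_buffered >= b->patch_threshold) {
    buffer_flush (b, fp);
  }
}
```

With `patch_threshold = 2` and five patches, this model performs three writes (two full flushes plus one final partial flush) instead of five, which is the trade-off the description above refers to: fewer, larger write operations at the cost of buffer memory.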