Open francescopt opened 4 years ago
Ok, this is going to be tedious :neutral_face:
Opened branch bugfix/122-32b-buffer-size
@francescopt Can I use your test code in a test?
Thanks
sure, go ahead
Things are more annoying than expected, since the 32-bit limit is an MPI standard limit (all buffer sizes are specified as `int`). There might be a work-around (not using bytes as the buffer type), but that won't be as trivial as replacing `int` with `std::size_t` in Boost.MPI.
A first "solution": this problem appears for serialized types (for basic types it is a plain MPI issue, which doesn't mean we shouldn't deal with it at some point), and serialized types are typically not primitives. We could set a minimal archive slot size (like 8 or 16 bytes) and allocate the internal buffer in units of that type. To do that in a user-friendly way (although we could templatize the slot size with a default value?) we need a useful compromise between the maximum buffer size (`std::numeric_limits<int>::max() * sizeof(slot)`) and the minimal serialized message size (`sizeof(slot)`).
Any ideas?
I just found out that the issue of the size of MPI data is apparently known, and there is a C library that addresses it: BigMPI. In the GitHub repository there is also an academic paper discussing the issue.
As for the minimum buffer size: I am not an expert, but could the page size be a good choice in terms of efficiency?
Page size looks a little big. The underlying implementation will use different types of communication (remote buffer on send for small messages, two steps (size + payload) for bigger communications, etc.). The question could be: what is a good default for the max message size? That default could be documented as modifiable in config.hpp.
The long term fix could be different, for example it could be based on Probe.
OK, I guess that this is more a matter of individual cases. For what I have in mind, a max size of 64 GB would be OK; that would be a slot size of 64 GB / 2^31 = 32 bytes.
I tried to experiment with the maximum size of scattered data with `MPI_Scatter` on a cluster I have access to. The test is done by defining custom MPI types with `MPI_Type_contiguous` or `MPI_Type_create_struct`: in both cases the program crashes in `MPI_Scatter` when the total size exceeds 2 GB. So I wonder if there is a more fundamental limitation...
The library crashes when performing a collective operation like `gather` when the size of the objects is very large, but that should still be manageable on supercomputers. The issue is not easy to reproduce, because it requires quite some memory available. The following program illustrates this:

The program creates an object of 1 GB of memory. The struct `huge` is defined so as to force the library to use a non-primitive MPI type. When run with only 2 tasks, it crashes giving:

On the supercomputer JUWELS, which has Boost 1.69, the error is:
It appears that the program crashes around this line in `gather.hpp`. The same crash occurs even when running with 1 task.
Reducing the size of `huge` makes the program crash again; this time it appears that the crash happens at this line of `gather.hpp`.
My impression is that the sizes are sometimes stored as `int` when they should be `size_t`. For example, in the line above, `oasizes` is a `std::vector` of `int`: even if the single-object size fits into an `int`, the total buffer of gathered objects could exceed 2^31.