Closed torbjoernk closed 9 years ago
You could also have each MPI rank write to their own logging file... just need a tiny bit of extra logic to create unique logging file names for each rank. Might be a reasonable solution in the short term instead of having to get down and dirty with MPI-IO.
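The per-rank file-name logic is indeed tiny. A minimal sketch (the `pfasst_rank_` prefix, padding width, and helper name are illustrative choices, not PFASST++ API):

```cpp
#include <iomanip>
#include <sstream>
#include <string>

// Build a unique log file name for a given MPI rank, zero-padded so the
// files sort correctly in a directory listing, e.g. "pfasst_rank_0007.log".
std::string log_file_name(int rank, const std::string& prefix = "pfasst_rank_")
{
  std::ostringstream name;
  name << prefix << std::setw(4) << std::setfill('0') << rank << ".log";
  return name.str();
}

// In an actual MPI run one would query the rank first, e.g.:
//   int rank;
//   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
//   std::ofstream log(log_file_name(rank));
```

Since each rank opens its own file, no MPI-IO coordination is needed at all.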
+1. No need to worry about this for now. Data output coming from PFASST++ is supposed to be low anyway, the spatial solver/application will take care of big volume data.
Having a plain-text logging file for each rank in time (and only one in space for that matter, e.g. one per root of each spatial communicator) should give you no more than 100ish files per run, each containing only some status messages.
On 07.04.15 18:49, Matthew Emmett wrote:
> You could also have each MPI rank write to their own logging file... just need a tiny bit of extra logic to create unique logging file names for each rank.
So I just did the first test run on JUQUEEN. Works really nicely. Had to do a few tweaks here and there (going to make a PR for that). Now we just need a real-world example with space-only communicators. :wink:
When running in parallel with MPI, the logging output should be handled differently than in a serial environment. Two options come to mind:
Option one is low-hanging fruit and easy to realize. The second option is probably the one to go for in the long run, but it requires more work to accomplish.
Comments?