First, this avoids the integer overflow in the displs array of MPI_Gatherv by replacing the gatherv calls with point-to-point send/recv pairs.
Second, the sparsity pattern is now assumed to be global on the MUMPS communicator. I think this is reasonable for the multiperiod case and hopefully more generally. For multiperiod this means you must have at least as many processes in the MUMPS communicator as there are periods. This does NOT mean that MUMPS actually decides to run on that many processes (e.g. for small problems). You also have to distribute the scenarios in the model so that the first ranks hold contingency 1 with periods 1, 2, 3, ..., and not contingencies 1, 2, 3, ... with only one period.
The following changes:
1. Skip the reduction of the entries that share a common pattern.
2. Gather the pattern differences from the MUMPS ranks only, with the values of M_diff set to 0, so that only zero-valued entries are created on rank 0. Rank 0 then holds the global sparsity pattern.
3. Do the reduction on all processes. Under the assumption above, there should be no mismatch.
This has been tested on bigger cases on Theta. I will keep on comparing the PIPS versions.