There is a workaround, but it is unclear whether it addresses the root cause. The workaround is to always use the custom MPI data type described in the Confluence document, regardless of whether the number of elements in the MPI operation actually exceeds the limit on how many can be sent.
In future, it may be worth investigating the cause of this behaviour further and proposing a more satisfactory fix.
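The limit being worked around is the C `int` used for MPI element counts: a single send or receive can describe at most INT_MAX (2**31 - 1) elements. Wrapping a fixed block of elements into one derived datatype (e.g. with MPI_Type_contiguous) divides the transmitted count by the block size, which is why applying the custom type unconditionally is safe. The arithmetic can be sketched as follows (a generic illustration with a hypothetical helper, not the code from the Confluence page):

```python
# Sketch of the count arithmetic behind the derived-datatype workaround.
# MPI counts are C ints, so at most INT_MAX elements per call; grouping
# `block` elements into one derived datatype shrinks the count by `block`.

INT_MAX = 2**31 - 1


def blocked_count(n_elements: int, block: int) -> tuple[int, int]:
    """Return (count, remainder) when n_elements are sent as units of
    `block` contiguous elements each; count must fit in a C int."""
    count, remainder = divmod(n_elements, block)
    if count > INT_MAX:
        raise ValueError("block too small: count still exceeds INT_MAX")
    return count, remainder


# 6e9 elements cannot be described by a plain int count...
n = 6 * 10**9
assert n > INT_MAX
# ...but sent as units of 1024 contiguous elements, the count fits.
count, rem = blocked_count(n, 1024)
assert count <= INT_MAX
```

Any elements left over by the division (the remainder) would need a separate send or a block size that divides the total evenly; the custom type in the Confluence document presumably handles this detail.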
See the following Diamond Confluence page for details: https://confluence.diamond.ac.uk/x/swC5D.