Open fmahebert opened 6 months ago
Hi Francois, this is intentional behaviour (I added it a while ago). It allows the serial partitioner to be used for problems where every MPI task has a copy of the grid data. If you would like some other kind of single-processor partitioner, it would be easy enough to add one.
@twsearle That's fair enough. But for my understanding, can you explain a bit more about the intention and behavior of the `partition` config option? What does it control, if not the task that will own the points?
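To make the question concrete, the sort of usage I have in mind is sketched below; the exact constructor overloads are assumptions about the Atlas API rather than lines copied from my code.

```cpp
// Sketch only: pass the "partition" option to the serial partitioner.
// The Partitioner(type, config) overload and the util::Config constructor
// are assumed here, not copied from the code that triggered this issue.
#include "atlas/grid/Partitioner.h"
#include "atlas/util/Config.h"

atlas::grid::Partitioner makeSerialPartitioner() {
    // My expectation was that "partition" selects the MPI task that will
    // own all of the grid points.
    atlas::util::Config config("partition", 0);
    return atlas::grid::Partitioner("serial", config);
}
```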
Sorry, I am not sure about the `partition` config option; it sounds like something I missed when I made my change. Anyway, I just dropped in to make sure the possibility of running a functionspace duplicated in this way is maintained. It's a feature, not a bug, from my point of view, although I don't mind how it's implemented.
What happened?
When attempting to set up a FunctionSpace whose grid points all live on one particular MPI task, I find the serial partitioner does not act the way I expect it to: the full grid appears to be created on every rank.
See snippets and outputs below for code specifics.
Is this reflective of user error in setting up the serial partitioner, or is there a bug?
What are the steps to reproduce the bug?
When I run this code on 6 MPI tasks...
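A minimal sketch of this kind of setup (not my exact code; the grid name "O8" and the choice of StructuredColumns are placeholders):

```cpp
// Minimal sketch (not the original snippet): build a function space with the
// "serial" partitioner and print how many points each rank sees. The grid
// name "O8" and the use of StructuredColumns are placeholders.
#include "atlas/library.h"
#include "atlas/grid.h"
#include "atlas/grid/Partitioner.h"
#include "atlas/functionspace/StructuredColumns.h"
#include "atlas/parallel/mpi/mpi.h"
#include "atlas/runtime/Log.h"

int main(int argc, char** argv) {
    atlas::initialize(argc, argv);
    {
        atlas::Grid grid("O8");
        atlas::grid::Partitioner partitioner("serial");
        atlas::functionspace::StructuredColumns fs(grid, partitioner);

        // Observed behaviour: every rank reports the full grid size.
        atlas::Log::info() << "rank " << atlas::mpi::comm().rank()
                           << ": functionspace size = " << fs.size()
                           << ", grid size = " << grid.size() << std::endl;
    }
    atlas::finalize();
    return 0;
}
```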
I get the output...
Whereas with this code...
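One way to express an all-points-on-rank-0 setup (again a sketch rather than my exact code; it assumes Atlas's Partitioner(type, nb_partitions) constructor and the "equal_regions" partitioner type):

```cpp
// Sketch only: request a single-partition distribution so that rank 0 owns
// every grid point and the remaining ranks own none. The "equal_regions"
// type and the (type, nb_partitions) overload are assumptions.
#include "atlas/grid.h"
#include "atlas/grid/Partitioner.h"
#include "atlas/functionspace/StructuredColumns.h"
#include "atlas/parallel/mpi/mpi.h"
#include "atlas/runtime/Log.h"

void reportSinglePartitionSizes() {
    atlas::Grid grid("O8");

    // A single partition overall: every point is assigned to partition 0,
    // i.e. MPI rank 0; the other ranks end up with an empty local domain.
    atlas::grid::Partitioner partitioner("equal_regions", 1);
    atlas::functionspace::StructuredColumns fs(grid, partitioner);

    atlas::Log::info() << "rank " << atlas::mpi::comm().rank()
                       << ": functionspace size = " << fs.size() << std::endl;
}
```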
I get the expected all-on-rank-0 distribution...
Version
0.36
Platform (OS and architecture)
Linux x86_64
Relevant log output
No response
Accompanying data
No response
Organisation
JCSDA