Closed mattbement closed 9 months ago
It's an interesting observation, for sure. I agree that the general case with different communicators makes sense. It would mean that we would simply use these communicators to create the union inside DTK, as you currently do outside of it.
We used to have this comm-union capability in an older DTK, but the user always gave us a third communicator for the global execution scope of the operator (which was the union). I guess if we just duplicate MPI_COMM_WORLD
we could create the union from that and avoid hangs?
DataTransferKit is going to be archived.
OK - not really an "issue". More of a question. What's the rationale behind supplying a single communicator to the nearest neighbor and least squares operators? It seems like it would be good to separately provide the communicator that source_points lives on and the communicator that target_points lives on. Let me explain why. For the parallel-in-time stuff, I've got a bunch of "simulations". Each simulation has its own communicator. I need to get results from simulation i to simulation i+1. So say simulation 0 has source_points, with temperature defined on those points, and simulation 1 has target_points. As it stands, I create a communicator which is basically the union of these two communicators, and do the mapping. No problem. But then for simulation 1 mapping to simulation 2, I create another communicator. However, simulation 1 already has target points defined, and I think this messes up the operator (it can't tell the difference between the target points that I want on simulation 2 and the target points on simulation 1, because they've been lumped into a single communicator).