I have the following test code in Python, which prints the GPU id for each rank on each node:
and I use the following EnTK script to launch the job with 8 processes across 2 nodes:
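A minimal sketch of such a per-rank test script (the original is not shown here; the exact variable set is an assumption — each rank reports its hostname, PMI indices, and the GPUs assigned via CUDA_VISIBLE_DEVICES):

```python
# Hypothetical reconstruction of the per-rank test script.
# Each MPI rank prints its hostname, PMI rank indices, and the
# GPU(s) it was handed via CUDA_VISIBLE_DEVICES.
import os
import socket


def rank_report():
    hostname   = socket.gethostname()
    rank       = os.environ.get("PMI_RANK", "unset")
    local_rank = os.environ.get("PMI_LOCAL_RANK", "unset")
    size       = os.environ.get("PMI_SIZE", "unset")
    gpus       = os.environ.get("CUDA_VISIBLE_DEVICES", "unset")
    return (f"host={hostname} PMI_RANK={rank} "
            f"PMI_LOCAL_RANK={local_rank} PMI_SIZE={size} "
            f"CUDA_VISIBLE_DEVICES={gpus}")


if __name__ == "__main__":
    print(rank_report())
```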
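A hypothetical sketch of what the EnTK launch script could look like (the original is not shown; the resource label, walltime, and file name `test.py` are assumptions, not the actual values used):

```python
# Sketch of an EnTK script launching 8 MPI ranks, one GPU each,
# across 2 Polaris nodes (4 GPUs per node). Illustrative only;
# resource_desc values are placeholders.
from radical.entk import Pipeline, Stage, Task, AppManager

t = Task()
t.executable = 'python3'
t.arguments  = ['test.py']                 # the per-rank test script
t.cpu_reqs   = {'cpu_processes': 8,        # 8 MPI ranks in total
                'cpu_process_type': 'MPI',
                'cpu_threads': 1,
                'cpu_thread_type': 'OpenMP'}
t.gpu_reqs   = {'gpu_processes': 1,        # one GPU per rank
                'gpu_process_type': 'CUDA'}

s = Stage()
s.add_tasks(t)
p = Pipeline()
p.add_stages(s)

appman = AppManager()
appman.resource_desc = {'resource': 'anl.polaris',  # assumed resource label
                        'walltime': 30,
                        'cpus': 64,
                        'gpus': 8}
appman.workflow = [p]
appman.run()
```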
and I got the following results:
We can notice that on node x3006c0s19b0n0, only GPUs 1 and 3 are used, and each is used twice. The same happens on the other node. What we actually want is to use all GPUs on both nodes, with one GPU per process and no overlap.
I found that this issue arises mainly because mpiexec on Polaris uses a different logic to map PMI_RANK to PMI_LOCAL_RANK and node index than RCT expects, while RCT assigns the GPU id (CUDA_VISIBLE_DEVICES) based on the rank index. If I add a --ppn flag to mpiexec, the issue is solved. We can compare how mpiexec maps PMI_RANK to PMI_LOCAL_RANK and node index with and without the flag (just ignore gpu_id and focus on PMI_RANK!):
As we can see, with the --ppn flag, PMI_RANK is assigned in a "greedy" (blocked) way: consecutive PMI_RANK values go to ranks on the same node. Without the --ppn flag, PMI_RANK is assigned in a "round-robin" way: consecutive PMI_RANK values go to ranks with the same local rank index across nodes.
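The two layouts described above can be illustrated with a small simulation (hypothetical helper names; this mirrors the observed behavior, not mpiexec's actual code). Note how, under the round-robin layout, a GPU assignment computed as `rank % ppn` would give the node holding ranks 1, 3, 5, 7 the GPU ids 1, 3, 1, 3 — consistent with the doubled GPUs seen above:

```python
# Sketch of the two PMI_RANK layouts (illustrative only).


def blocked_layout(n_nodes, ppn):
    """--ppn style: consecutive ranks fill one node before the next."""
    # rank -> (node_index, local_rank)
    return {rank: (rank // ppn, rank % ppn)
            for rank in range(n_nodes * ppn)}


def round_robin_layout(n_nodes, ppn):
    """No --ppn: consecutive ranks cycle across nodes."""
    # rank -> (node_index, local_rank)
    return {rank: (rank % n_nodes, rank // n_nodes)
            for rank in range(n_nodes * ppn)}


if __name__ == "__main__":
    # 2 nodes, 4 ranks per node (8 ranks total), as in the job above.
    print("blocked (--ppn):", blocked_layout(2, 4))
    print("round-robin:    ", round_robin_layout(2, 4))
```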