Open: mauriliogenovese opened this pull request 8 months ago
Attention: Patch coverage is 69.69697% with 10 lines in your changes missing coverage. Please review.
Project coverage is 63.45%. Comparing base (a17de8e) to head (a642430).
Files | Patch % | Lines |
---|---|---|
nipype/pipeline/plugins/multiproc.py | 67.74% | 6 Missing and 4 partials :warning: |
Just to check my understanding: in this model, a GPU-enabled job gets exclusive access to one full GPU, so the GPU queue is simply a count of available GPUs checked against the number of GPU-enabled jobs? There's no notion of a job acquiring multiple GPUs or partial GPUs?
From some quick searching, it's at least possible (though I don't know how common) to write programs that utilize multiple GPUs, so I think we should allow nodes to be tagged with multiple GPU threads.
If the CPU usage of a process is negligible, I think it would be reasonable to say:

```python
myproc = pe.Node(ProcessInterface(), name="myproc", n_threads=0, n_gpus=2)
```
In the current implementation the user specifies how many GPU slots (`n_gpu_procs`) the plugin should manage, and the plugin reserves those slots based on the `node.n_threads` property. If you think it's useful, we can allow the user to specify different values for "gpu_procs" and "cpu_procs" for each node. What should the behaviour be if the user does not specify the `n_gpus` property? `n_gpus = n_threads`?
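For concreteness, here is a minimal sketch of the slot accounting described here. Only `n_gpu_procs` and `node.n_threads` come from this thread; the class and method names are hypothetical, and the real logic lives in `nipype/pipeline/plugins/multiproc.py`:

```python
# Hedged sketch of GPU slot accounting, assuming GPU slots are
# reserved from node.n_threads. GpuSlotTracker, can_run, reserve,
# and release are illustrative names, not the plugin's actual API.

class GpuSlotTracker:
    def __init__(self, n_gpu_procs):
        self.n_gpu_procs = n_gpu_procs  # total GPU "slots" the plugin manages
        self.used = 0                   # slots currently reserved

    def can_run(self, node):
        # A GPU node reserves slots based on its n_threads property,
        # mirroring how CPU threads are accounted for.
        return self.used + node.n_threads <= self.n_gpu_procs

    def reserve(self, node):
        if node.n_threads > self.n_gpu_procs:
            # Same failure mode as classic CPU threads: the node could
            # never be scheduled, so raise instead of deadlocking.
            raise RuntimeError(
                f"Node requires {node.n_threads} GPU slots, "
                f"but only {self.n_gpu_procs} are managed"
            )
        self.used += node.n_threads

    def release(self, node):
        self.used -= node.n_threads
```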
I wrote a simpler implementation of this old pull request to handle a queue of threads to be executed on the GPU. The user can specify the maximum number of parallel GPU threads with the plugin option `n_gpu_procs`. The MultiProc plugin will raise an exception if a node requires more threads than allowed, in the same way as classic CPU threads. Note that in this implementation any GPU node also allocates a CPU slot (is that necessary? We can change that behaviour). Moreover, the plugin doesn't check that the system actually has a CUDA-capable GPU (we can add such a check if you think we need it).
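For reference, running a workflow under this scheme might look like the sketch below. Only the `n_gpu_procs` plugin option is confirmed by this thread; the `use_gpu` node flag and the rest of the setup are assumptions for illustration:

```python
# Hedged usage sketch. n_gpu_procs is the plugin option from this PR;
# the use_gpu node flag is an assumed way to mark a node GPU-enabled.
import nipype.pipeline.engine as pe
from nipype.interfaces.utility import Function

def gpu_task():
    pass  # placeholder for work that actually runs on the GPU

wf = pe.Workflow(name="gpu_wf")
node = pe.Node(Function(function=gpu_task), name="gpu_node", n_procs=1)
node.use_gpu = True  # assumed flag marking this node as GPU-enabled
wf.add_nodes([node])

# Run with 8 CPU slots and at most 2 GPU slots managed by the plugin;
# in this implementation the GPU node also consumes one CPU slot.
wf.run(plugin="MultiProc",
       plugin_args={"n_procs": 8, "n_gpu_procs": 2})
```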