Hello. I could not find in the documentation a variable that holds the number of GPUs used by the process. The closest I could find is `task.accelerator.request`, from here.
It would be nice to have a variable called `task.gpus`, like `task.cpus` exists for CPUs.
Usage scenario
At run time of a process it can be useful to know the number of CPUs, GPUs, and memory requested. The first and last already have their own `task` variables with intuitive names.
These could be useful metrics for GPU-intensive processes such as ML applications.
A practical application could be `ray.init()` (see its documentation), where an explicit hard limit can be set for resources.
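As a minimal sketch of the scenario above, this is how the requested GPU count can be forwarded to Ray today via `task.accelerator.request` (the process name, resource values, and the Python one-liner are illustrative, not part of any real pipeline):

```nextflow
process trainModel {
    cpus 8
    memory '16 GB'
    accelerator 1

    script:
    """
    python -c '
    import ray
    # task.cpus and task.memory have intuitive names; for GPUs the only
    # option is the longer task.accelerator.request, which a task.gpus
    # variable would make symmetric with the others
    ray.init(num_cpus=${task.cpus}, num_gpus=${task.accelerator.request})
    '
    """
}
```

With a `task.gpus` variable, the last interpolation would simply read `num_gpus=${task.gpus}`, matching `num_cpus=${task.cpus}`.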