Closed · jmakov closed this issue 2 years ago
Hi, I'm a bot from the Ray team :)
To help human contributors focus on more relevant issues, I will automatically add the stale label to issues that have had no activity for more than 4 months.
If there is no further activity in the next 14 days, the issue will be closed!
You can always ask for help on our discussion forum or Ray's public slack channel.
Hi again! The issue will be closed because there has been no further activity in the 14 days since the last message.
Please feel free to reopen or open a new issue if you'd still like it to be addressed.
Again, you can always ask for help on our discussion forum or Ray's public slack channel.
Thanks again for opening the issue!
What happened + What you expected to happen
A library is using all the cores on a node (multithreading). The idea is to distribute the work by using
ray.remote(num_cpus=_max_cpus_on_a_node_)
. Running the code without Ray, all cores are used. Running with Ray, even the driver node uses only 1 core/thread. Do env vars need to be set somewhere other than before the worker start command? This works (uses all cores), but warnings are reported (since we're using an empty string instead of an int). Is there a better way?
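For context, one documented workaround is that Ray sets OMP_NUM_THREADS for its workers (pinning multithreaded libraries such as NumPy/OpenMP-based ones to one thread), and that this can be overridden per task via the `runtime_env` `env_vars` key. Below is a minimal sketch of building such task options; `crunch` and the exact CPU count are illustrative assumptions, not taken from the issue:

```python
import os

# Sketch: Ray workers inherit a restrictive OMP_NUM_THREADS by default, which
# pins multithreaded libraries to a single core. Overriding it through the
# task's runtime_env lets the library use the CPUs reserved via num_cpus.
max_cpus = os.cpu_count() or 1

# Options intended for ray.remote(...) / task.options(...); "env_vars" is the
# documented runtime_env key for per-task environment variables.
task_options = {
    "num_cpus": max_cpus,
    "runtime_env": {"env_vars": {"OMP_NUM_THREADS": str(max_cpus)}},
}

# Usage (requires Ray; shown as a comment so the sketch stays self-contained):
#   import ray
#   @ray.remote(**task_options)
#   def crunch(data):  # hypothetical multithreaded workload
#       ...
```

Passing an integer string (rather than an empty string) should also avoid the warnings mentioned above.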
Versions / Dependencies
env.yaml:

Reproduction script

ray_cluster.yaml:

Issue Severity
Medium