pytorch / rl

A modular, primitive-first, python-first PyTorch library for Reinforcement Learning.
https://pytorch.org/rl
MIT License

[BUG] MacOS Env cannot set number of intraop threads after parallel work has started #1778

Open skandermoalla opened 9 months ago

skandermoalla commented 9 months ago

To Reproduce

from torchrl.envs import EnvCreator, ParallelEnv
from torchrl.envs.libs.gym import GymEnv

def run(from_pixels):
    env = ParallelEnv(
        2, EnvCreator(lambda: GymEnv("CartPole-v1", from_pixels=from_pixels))
    )
    print(env.reset())
    env.close()

if __name__ == "__main__":
    # No Bug:
    # run(from_pixels=False)

    # Bug:
    # On macOS (Apple Silicon) the following call outputs:
    # [W ParallelNative.cpp:230] Warning: Cannot set number of intraop threads after parallel work has started or after set_num_threads call when using native parallel backend (function set_num_threads)
    run(from_pixels=True)

Yields

torchrl ❯ python issue_cant_set_num_threads.py 
/Users/moalla/mambaforge/envs/torchrl/lib/python3.10/site-packages/gymnasium/core.py:311: UserWarning: WARN: env.num_envs to get variables from other wrappers is deprecated and will be removed in v1.0, to get this variable you can do `env.unwrapped.num_envs` for environment variables or `env.get_wrapper_attr('num_envs')` that will search the reminding wrappers.
  logger.warn(
/Users/moalla/mambaforge/envs/torchrl/lib/python3.10/site-packages/gymnasium/core.py:311: UserWarning: WARN: env.reward_space to get variables from other wrappers is deprecated and will be removed in v1.0, to get this variable you can do `env.unwrapped.reward_space` for environment variables or `env.get_wrapper_attr('reward_space')` that will search the reminding wrappers.
  logger.warn(
/Users/moalla/projects/open-source/torchrl/rl/torchrl/envs/batched_envs.py:765: UserWarning: Cannot set number of intraop threads after parallel work has started or after set_num_threads call when using native parallel backend (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1702400234613/work/aten/src/ATen/ParallelNative.cpp:230.)
  torch.set_num_threads(self.num_threads)
TensorDict(
    fields={
        done: Tensor(shape=torch.Size([2, 1]), device=cpu, dtype=torch.bool, is_shared=False),
        pixels: Tensor(shape=torch.Size([2, 400, 600, 3]), device=cpu, dtype=torch.uint8, is_shared=False),
        terminated: Tensor(shape=torch.Size([2, 1]), device=cpu, dtype=torch.bool, is_shared=False),
        truncated: Tensor(shape=torch.Size([2, 1]), device=cpu, dtype=torch.bool, is_shared=False)},
    batch_size=torch.Size([2]),
    device=cpu,
    is_shared=False)
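
For reference, here is a minimal standalone sketch of what I believe triggers the warning (an assumption about ATen's native parallel backend, not code taken from torchrl): once any intra-op parallel work has run in a process, a later torch.set_num_threads call emits the same message. The pixel-rendering path presumably does such work before ParallelEnv reaches the torch.set_num_threads(self.num_threads) call in batched_envs.py.

import torch

# Assumption: a sizeable matmul is enough to start ATen's intra-op thread pool.
x = torch.ones(2000, 2000)
_ = x @ x  # intra-op parallel work has now run in this process

# With the native parallel backend this call may now warn:
# "Cannot set number of intraop threads after parallel work has started ..."
torch.set_num_threads(4)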

System info

Only tested on macOS for now.


torchrl ❯ mamba list | grep torch             
# packages in environment at /Users/moalla/mambaforge/envs/torchrl:
pytorch                   2.1.2                  py3.10_0    pytorch
torchrl                   0.2.1+1874e9a             dev_0    <develop>
torchvision               0.16.2                py310_cpu    pytorch

vmoens commented 9 months ago

Unfortunately I can't reproduce this with torch==2.1.2 or the nightlies on macOS...