yuntae96 opened 1 year ago

Hi, I want to use the "num_envs" argument. When I run "PYTHON_PATH scripts/rlgames_train.py task=ShadowHand num_envs=512", I get the error message below. How can I use the "num_envs" argument? Thank you.

---

Hi there, you may also need to adjust the minibatch_size parameter in the PPO config file when modifying num_envs. The rl-games library requires that the rollout batch size, num_envs * horizon_length, be divisible by minibatch_size. Please see https://github.com/NVIDIA-Omniverse/OmniIsaacGymEnvs/blob/main/docs/troubleshoot.md#rl-training for more details.
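To make the constraint concrete, here is a small sketch of the check rl-games effectively performs, under the assumption that the rollout batch is num_envs * horizon_length and must be evenly divisible by minibatch_size. The function names and the numeric values (512 envs, horizon_length 16, minibatch_size 32768) are hypothetical illustrations, not taken from the shipped ShadowHand config:

```python
def valid_minibatch_size(num_envs: int, horizon_length: int,
                         minibatch_size: int) -> bool:
    """True if minibatch_size evenly divides the rollout batch size."""
    batch_size = num_envs * horizon_length
    return batch_size % minibatch_size == 0


def largest_valid_minibatch(num_envs: int, horizon_length: int,
                            upper_bound: int) -> int:
    """Largest divisor of the batch size not exceeding upper_bound.

    Hypothetical helper for picking a replacement minibatch_size after
    lowering num_envs.
    """
    batch_size = num_envs * horizon_length
    for candidate in range(min(upper_bound, batch_size), 0, -1):
        if batch_size % candidate == 0:
            return candidate
    return 1  # 1 always divides, so the loop returns before reaching here


# With 512 envs and a horizon of 16, the batch is only 8192 transitions,
# so a minibatch_size of 32768 cannot divide it and training would fail.
print(valid_minibatch_size(512, 16, 32768))  # False
print(valid_minibatch_size(512, 16, 8192))   # True
print(largest_valid_minibatch(512, 16, 10000))  # 8192
```

So when dropping num_envs to 512, lowering minibatch_size in the task's PPO config to a divisor of num_envs * horizon_length (e.g. 8192 for a horizon of 16) should resolve the error.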