Closed mb9041 closed 2 months ago
Hello, do you see this at the beginning of the script's execution or somewhere in the middle?
If you see it at the very beginning, and it prints only once, it means that all the environments are being reset together. That is probably okay and just part of the environments' initialization.
If you see it again and again, or somewhere in the middle of execution, something is probably wrong with how the robot is spawning: the robot is being spawned in a collision state, which causes it to reset at the first timestep.
Hello @mb9041, I just tried it out again on my end. It happens with the vanilla setup, and in my case only once, at the beginning. You can safely ignore this error.
This was placed as a sanity check to alert users when certain environment configurations cause the robots to spawn in, or close to, collision states, leading to suboptimal samples for the RL algorithm. In the case mentioned above, it is printed because all environments are reset at the beginning of training, either by the RL algorithm or by the task code, and naturally an alert is raised that the environments are being reset too close to the start of the episode.
This is perfectly fine at this stage. If it happens later, users should check their environment configurations to prevent spawn locations near collision states.
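The check described above can be sketched roughly like this (all names and the threshold here are hypothetical, not Aerial Gym's actual implementation): compare each environment's episode step counter at reset time against a minimum-length threshold, treat a simultaneous reset at startup as benign initialization, and raise an alert otherwise.

```python
# Hypothetical sketch of an "early reset" sanity check for a vectorized
# RL environment; names do not match Aerial Gym's real code.
import numpy as np

MIN_EPISODE_STEPS = 10  # assumed threshold: resets earlier than this are suspicious


def check_early_resets(step_counts, reset_mask, global_step):
    """Return True if any environment reset suspiciously early.

    step_counts: per-env episode step counter at the moment of reset
    reset_mask:  boolean array marking which envs are resetting now
    global_step: total simulation steps since training started
    """
    early = reset_mask & (step_counts < MIN_EPISODE_STEPS)
    if not early.any():
        return False
    if global_step == 0:
        # All envs reset together at startup: normal initialization.
        print("Note: all environments reset at startup (initialization).")
        return False
    print(
        f"CRITICAL: {early.sum()} env(s) reset after fewer than "
        f"{MIN_EPISODE_STEPS} steps; check spawn poses for collisions."
    )
    return True


# At startup every env resets at step 0 -> benign, prints the note only.
check_early_resets(np.zeros(4, dtype=int), np.ones(4, dtype=bool), global_step=0)

# Later in training, one env resets after only 2 steps -> flagged.
check_early_resets(
    np.array([2, 50, 80, 120]),
    np.array([True, False, False, False]),
    global_step=5000,
)
```

This mirrors the behavior the maintainers describe: the warning at step zero is expected, while the same message later in training points at spawn poses that are in or near collision.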
@mihirk284 thank you for confirming this! I only see this error at the beginning of execution.
Great! I will mark this as resolved, then.
Hello!
I just installed Aerial Gym and I have been going through the RL training examples in the documentation.
I ran the suggested command for training the navigation policy:
python3 runner.py --file=./ppo_aerial_quad_navigation.yaml --num_envs=512 --headless=True
This command begins to train, but I get a CRITICAL warning about crashing too soon. Is this a problem? Do I need to change any parameters to avoid this error?
Error:
Note: I changed --num_envs from 1024 to 512 because I ran out of CUDA memory.
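Halving --num_envs is a reasonable response to running out of GPU memory, since per-step memory scales roughly linearly with the number of parallel environments. One simple way to pick a value, sketched with assumed numbers (this helper and the ~20 MB/env figure are illustrative, not from Aerial Gym):

```python
# Hypothetical helper: pick the largest halving of the default env count
# whose estimated footprint fits in the available GPU memory.
def max_num_envs(free_bytes, bytes_per_env, start=1024):
    """Halve the env count until the estimated usage fits the budget."""
    n = start
    while n > 1 and n * bytes_per_env > free_bytes:
        n //= 2
    return n


# Example with assumed figures: ~20 MB per env, 12 GB free.
print(max_num_envs(12 * 1024**3, 20 * 1024**2))  # -> 512
```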
Thank you in advance for your help!