I encounter the following error when running train_local.sh in a scenario without obstacles:
Traceback (most recent call last):
File "/home/saz/anaconda3/envs/swarm-rl/lib/python3.8/site-packages/signal_slot/signal_slot.py", line 355, in _process_signal
slot_callable(*args)
File "/home/saz/anaconda3/envs/swarm-rl/lib/python3.8/site-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts
complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing)
File "/home/saz/anaconda3/envs/swarm-rl/lib/python3.8/site-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts
new_obs, rewards, terminated, truncated, infos = e.step(actions)
File "/home/saz/anaconda3/envs/swarm-rl/lib/python3.8/site-packages/gymnasium/core.py", line 408, in step
return self.env.step(action)
File "/home/saz/anaconda3/envs/swarm-rl/lib/python3.8/site-packages/sample_factory/algo/utils/make_env.py", line 129, in step
obs, rew, terminated, truncated, info = self.env.step(action)
File "/home/saz/Desktop/qsrl_test/swarm_rl/env_wrappers/compatibility.py", line 44, in step
obs, reward, done, info = self.env.step(action)
File "/home/saz/Desktop/qsrl_test/swarm_rl/env_wrappers/reward_shaping.py", line 63, in step
obs, rewards, dones, infos = self.env.step(action)
File "/home/saz/Desktop/qsrl_test/gym_art/quadrotor_multi/quad_experience_replay.py", line 124, in step
obs = self.new_episode()
File "/home/saz/Desktop/qsrl_test/gym_art/quadrotor_multi/quad_experience_replay.py", line 184, in new_episode
self.curr_obst_density = replayed_env.obst_density
AttributeError: 'QuadrotorEnvMulti' object has no attribute 'obst_density'
[2023-12-16 05:59:10,259][206137] Unhandled exception 'QuadrotorEnvMulti' object has no attribute 'obst_density' in evt loop rollout_proc3_evt_loop
Process rollout_proc3:
Traceback (most recent call last):
File "/home/saz/anaconda3/envs/swarm-rl/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/saz/anaconda3/envs/swarm-rl/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/saz/anaconda3/envs/swarm-rl/lib/python3.8/site-packages/signal_slot/signal_slot.py", line 511, in _target
self.event_loop.exec()
File "/home/saz/anaconda3/envs/swarm-rl/lib/python3.8/site-packages/signal_slot/signal_slot.py", line 403, in exec
raise exc
File "/home/saz/anaconda3/envs/swarm-rl/lib/python3.8/site-packages/signal_slot/signal_slot.py", line 399, in exec
while self._loop_iteration():
File "/home/saz/anaconda3/envs/swarm-rl/lib/python3.8/site-packages/signal_slot/signal_slot.py", line 383, in _loop_iteration
self._process_signal(s)
File "/home/saz/anaconda3/envs/swarm-rl/lib/python3.8/site-packages/signal_slot/signal_slot.py", line 358, in _process_signal
raise exc
File "/home/saz/anaconda3/envs/swarm-rl/lib/python3.8/site-packages/signal_slot/signal_slot.py", line 355, in _process_signal
slot_callable(*args)
File "/home/saz/anaconda3/envs/swarm-rl/lib/python3.8/site-packages/sample_factory/algo/sampling/rollout_worker.py", line 241, in advance_rollouts
complete_rollouts, episodic_stats = runner.advance_rollouts(policy_id, self.timing)
File "/home/saz/anaconda3/envs/swarm-rl/lib/python3.8/site-packages/sample_factory/algo/sampling/non_batched_sampling.py", line 634, in advance_rollouts
new_obs, rewards, terminated, truncated, infos = e.step(actions)
File "/home/saz/anaconda3/envs/swarm-rl/lib/python3.8/site-packages/gymnasium/core.py", line 408, in step
return self.env.step(action)
File "/home/saz/anaconda3/envs/swarm-rl/lib/python3.8/site-packages/sample_factory/algo/utils/make_env.py", line 129, in step
obs, rew, terminated, truncated, info = self.env.step(action)
File "/home/saz/Desktop/qsrl_test/swarm_rl/env_wrappers/compatibility.py", line 44, in step
obs, reward, done, info = self.env.step(action)
File "/home/saz/Desktop/qsrl_test/swarm_rl/env_wrappers/reward_shaping.py", line 63, in step
obs, rewards, dones, infos = self.env.step(action)
File "/home/saz/Desktop/qsrl_test/gym_art/quadrotor_multi/quad_experience_replay.py", line 124, in step
obs = self.new_episode()
File "/home/saz/Desktop/qsrl_test/gym_art/quadrotor_multi/quad_experience_replay.py", line 184, in new_episode
self.curr_obst_density = replayed_env.obst_density
AttributeError: 'QuadrotorEnvMulti' object has no attribute 'obst_density'
I can avoid the error by changing that line to:
self.curr_obst_density = replayed_env.obst_density if hasattr(replayed_env, "obst_density") else self.curr_obst_density
Is it safe to assume that self.curr_obst_density is not used if I set --quads_use_obstacles=False in the shell script?
The error occurs roughly after 200M training steps.
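For reference, the same guard can be written more compactly with getattr, which returns a default when the attribute is missing. The stub classes below are hypothetical stand-ins, not the real QuadrotorEnvMulti or replay-buffer classes; this is only a sketch of the behavior of the workaround:

```python
class ReplayedEnvStub:
    """Stand-in for a replayed env that has no obst_density attribute
    (e.g. an episode recorded with --quads_use_obstacles=False)."""
    pass


class ExperienceReplayStub:
    """Minimal sketch of the guarded assignment in new_episode()."""

    def __init__(self):
        # Default density when obstacles are disabled (assumed value).
        self.curr_obst_density = 0.0

    def new_episode(self, replayed_env):
        # Keep the previous density if the replayed env lacks the attribute,
        # instead of raising AttributeError.
        self.curr_obst_density = getattr(
            replayed_env, "obst_density", self.curr_obst_density
        )
        return self.curr_obst_density


replay = ExperienceReplayStub()
print(replay.new_episode(ReplayedEnvStub()))  # falls back to 0.0
```

This is equivalent to the hasattr conditional, just in one call.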