tornado20092008 opened this issue 2 days ago
This was fixed by switching off shared_memory in subprocess_env_manager.py (a config-level alternative is sketched after the traceback below). However, now I am getting this:
pygame 2.6.1 (SDL 2.28.4, Python 3.7.16)
Hello from the pygame community. https://www.pygame.org/contribute.html
[ENV] Register environments: ['SimpleCarla-v1', 'ScenarioCarla-v1'].
------ Run Carla on Port: 9000, GPU: 0 ------
[SIMULATOR] Not providing TM port, try finding free
[SIMULATOR] Using TM port: 56485
------ Run Carla on Port: 9004, GPU: 0 ------
[SIMULATOR] Not providing TM port, try finding free
[SIMULATOR] Using TM port: 42571
[11-14 11:13:40] INFO [RANK0]: DI-engine DRL Policy base_learner.py:338
DQNRLModel(
  (_encoder): BEVSpeedConvEncoder(
    (_relu): ReLU()
    (_model): Sequential(
      (0): Conv2d(5, 64, kernel_size=(3, 3), stride=(2, 2))
      (1): ReLU()
      (2): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2))
      (3): ReLU()
      (4): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2))
      (5): ReLU()
      (6): Flatten(start_dim=1, end_dim=-1)
    )
    (_mid): Linear(in_features=2304, out_features=256, bias=True)
  )
  (_head): DuelingHead(
    (A): Sequential(
      (0): Sequential(
        (0): Linear(in_features=512, out_features=512, bias=True)
        (1): ReLU()
      )
      (1): Sequential(
        (0): Linear(in_features=512, out_features=21, bias=True)
      )
    )
    (V): Sequential(
      (0): Sequential(
        (0): Linear(in_features=512, out_features=512, bias=True)
        (1): ReLU()
      )
      (1): Sequential(
        (0): Linear(in_features=512, out_features=1, bias=True)
      )
    )
  )
)
pygame 2.6.1 (SDL 2.28.4, Python 3.7.16)
Hello from the pygame community. https://www.pygame.org/contribute.html
[ENV] Register environments: ['SimpleCarla-v1', 'ScenarioCarla-v1'].
------ Run Carla on Port: 9000, GPU: 0 ------
[SIMULATOR] Not providing TM port, try finding free
[SIMULATOR] Using TM port: 40265
------ Run Carla on Port: 9004, GPU: 0 ------
[SIMULATOR] Not providing TM port, try finding free
[SIMULATOR] Using TM port: 35481
**[11-14 11:13:59] ERROR Env 0 reset has exceeded max retries(5)** func.py:62
Traceback (most recent call last):
  File "/home/a_mohame/anaconda3/envs/my_env/lib/python3.7/site-packages/ding/utils/system_helper.py", line 57, in run
    self.ret = self._target(*self._args, **self._kwargs)
  File "/home/a_mohame/anaconda3/envs/my_env/lib/python3.7/site-packages/ding/envs/env_manager/subprocess_env_manager.py", line 459, in _reset
    self.close()
  File "/home/a_mohame/anaconda3/envs/my_env/lib/python3.7/site-packages/ding/envs/env_manager/subprocess_env_manager.py", line 764, in close
    p.send(['close', None, None])
  File "/home/a_mohame/anaconda3/envs/my_env/lib/python3.7/multiprocessing/connection.py", line 206, in send
    self._send_bytes(_ForkingPickler.dumps(obj))
  File "/home/a_mohame/anaconda3/envs/my_env/lib/python3.7/multiprocessing/connection.py", line 404, in _send_bytes
    self._send(header + buf)
  File "/home/a_mohame/anaconda3/envs/my_env/lib/python3.7/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "simple_rl_train_My.py", line 143, in <module>
    main(args)
  File "simple_rl_train_My.py", line 104, in main
    cfg.policy.eval.evaluator, evaluate_env, policy.eval_mode, exp_name=cfg.exp_name
  File "/home/a_mohame/DI-drive/core/eval/serial_evaluator.py", line 59, in __init__
    super().__init__(cfg, env, policy, tb_logger=tb_logger, exp_name=exp_name, instance_name=instance_name)
  File "/home/a_mohame/DI-drive/core/eval/base_evaluator.py", line 49, in __init__
    self.env = env
  File "/home/a_mohame/DI-drive/core/eval/serial_evaluator.py", line 76, in env
    self._env_manager.launch()
  File "/home/a_mohame/anaconda3/envs/my_env/lib/python3.7/site-packages/ding/envs/env_manager/subprocess_env_manager.py", line 351, in launch
    self.reset(reset_param)
  File "/home/a_mohame/anaconda3/envs/my_env/lib/python3.7/site-packages/ding/envs/env_manager/subprocess_env_manager.py", line 408, in reset
    t.join()
  File "/home/a_mohame/anaconda3/envs/my_env/lib/python3.7/site-packages/ding/utils/system_helper.py", line 64, in join
    raise RuntimeError('Exception in thread({})'.format(id(self))) from self.exc
RuntimeError: Exception in thread(140351789910736)
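For reference, the shared_memory switch can usually be made from the experiment config rather than by editing subprocess_env_manager.py inside site-packages. A minimal sketch, assuming DI-engine's subprocess env manager reads `shared_memory` from its manager config as in ding 0.4 (the exact nesting of `env.manager` in the simple_rl config may differ):

```python
# Minimal sketch: disable the shared-memory observation buffer via config
# instead of patching site-packages. Assumes the manager dict lives under
# env.manager, as in the standard DI-engine config layout.
from easydict import EasyDict

override = EasyDict(dict(
    env=dict(
        manager=dict(
            shared_memory=False,  # same effect as the manual source edit
        ),
    ),
))

# Merge into the experiment config before the env managers are created, e.g.:
# main_config.env.manager.update(override.env.manager)
```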
Description:
When running simple_rl_train.py, I encounter an error where the environment (Carla) attempts a reset multiple times, causing an `AttributeError` because `obs_space` is `None`. This happens after the environment appears to initialize successfully.
Error Traceback:
See the full traceback above.
Setup Details:
- DI-engine: 0.4
- DI-drive: 0.3.4
- Gym: 0.20.0
- Carla: 0.9.10
- Python: 3.7.16
Steps to Reproduce:
1. Run `python simple_rl_train.py`.
2. The environments initialize and register, and the simulation begins to reset.
3. After resetting, the environment unexpectedly attempts another reset.
4. The program throws an `AttributeError` in `subprocess_env_manager.py` because `obs_space` is `None`.
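Since the "Env 0 reset has exceeded max retries(5)" message swallows the underlying Carla-side exception, one way to get at the root cause is to reset a single environment outside the subprocess manager, where the real traceback prints directly. A rough sketch, assuming DI-drive's `SimpleCarlaEnv` takes `(cfg, host, port)` roughly as in the 0.3.x sources (the constructor and reset parameters may differ in your version) and that a Carla server is already listening on localhost:9000:

```python
# Rough sketch: call reset() on one env without the subprocess manager so the
# root-cause exception is visible instead of the retry wrapper's RuntimeError.
# Assumptions: core.envs.SimpleCarlaEnv with a (cfg, host, port) constructor,
# and a Carla server on localhost:9000.
from easydict import EasyDict
from core.envs import SimpleCarlaEnv

env_cfg = EasyDict(dict())  # paste the env section of your training config here

env = SimpleCarlaEnv(env_cfg, 'localhost', 9000)
try:
    # If reset() needs extra parameters in your version, pass the same
    # reset_param the evaluator would use.
    obs = env.reset()  # if this raises, the traceback shows why obs_space ends up None
    print('reset ok:', type(obs))
finally:
    env.close()
```

If the bare reset succeeds, the failure is more likely in the worker/IPC layer; raising `max_retry` or the reset timeout in the manager config (if your DI-engine version exposes them) may also help, since two Carla servers sharing one GPU can be slow to answer the first reset.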