metadriverse / metadrive

MetaDrive: Open-source driving simulator
https://metadriverse.github.io/metadrive/
Apache License 2.0

MetaDrive instance is broken when calling env.reset() #683

Open guanjiayi opened 3 months ago

guanjiayi commented 3 months ago

Hello. After training the model for a while, I ran online evaluation against the environment. However, on the second call to env.reset(), it reported that the current MetaDrive instance was broken and suggested calling env.close() before env.reset(). Even after calling env.close() before env.reset(), the problem persisted.

The error message is as follows:

Traceback (most recent call last):
  File "train_eval.py", line 14, in <module>
    jaynes.run(thunk)
  File "/home/******/.conda/envs/diffuser_safe_v03/lib/python3.8/site-packages/jaynes/jaynes.py", line 280, in run
    return fn(*args, **kwargs)
  File "/home/******/.conda/envs/diffuser_safe_v03/lib/python3.8/site-packages/ml_logger/__init__.py", line 220, in thunk
    raise e
  File "/home/******/.conda/envs/diffuser_safe_v03/lib/python3.8/site-packages/ml_logger/__init__.py", line 203, in thunk
    results = fn(*(args or ARGS), **_KWARGS)
  File "/home/******/opt/diffusion/diffuser_safe_v03/scripts/train_eval.py", line 196, in main
    trainer.train(n_train_steps=Config.n_steps_per_epoch, wandb_flag=Config.wandb)
  File "/home/******/opt/diffusion/diffuser_safe_v03/diffuser/utils/training_eval.py", line 280, in train
    rewards_mean, rewards_std, costs_mean, costs_std, normalize_reward_mean, normalize_cost_mean, normalize_reward_std, normalize_cost_std = self.on_eval()
  File "/home/******/opt/diffusion/diffuser_safe_v03/diffuser/utils/training_eval.py", line 145, in on_eval
    obs_list = [env.reset()[0] for env in self.env_list]
  File "/home/******/opt/diffusion/diffuser_safe_v03/diffuser/utils/training_eval.py", line 145, in <listcomp>
    obs_list = [env.reset()[0] for env in self.env_list]
  File "/home/******/.conda/envs/diffuser_safe_v03/lib/python3.8/site-packages/gym/wrappers/time_limit.py", line 68, in reset
    return self.env.reset(**kwargs)
  File "/home/******/.conda/envs/diffuser_safe_v03/lib/python3.8/site-packages/gym/wrappers/order_enforcing.py", line 42, in reset
    return self.env.reset(**kwargs)
  File "/home/******/.conda/envs/diffuser_safe_v03/lib/python3.8/site-packages/gym/wrappers/env_checker.py", line 45, in reset
    return env_reset_passive_checker(self.env, **kwargs)
  File "/home/******/.conda/envs/diffuser_safe_v03/lib/python3.8/site-packages/gym/utils/passive_env_checker.py", line 192, in env_reset_passive_checker
    result = env.reset(**kwargs)
  File "/home/******/opt/diffusion/diffuser_safe_v03/DSRL/dsrl/offline_metadrive/gym_envs.py", line 45, in reset
    return super().reset(force_seed=seed), {}
  File "/home/******/.conda/envs/diffuser_safe_v03/lib/python3.8/site-packages/metadrive/envs/safe_metadrive_env.py", line 34, in reset
    return super(SafeMetaDriveEnv, self).reset(*args, **kwargs)
  File "/home/******/.conda/envs/diffuser_safe_v03/lib/python3.8/site-packages/metadrive/envs/base_env.py", line 336, in reset
    raise ValueError(
ValueError: Current MetaDrive instance is broken. Please make sure there is only one active MetaDrive environment exists in one process. You can try to call env.close() and then call env.reset() to rescue this environment. However, a better and safer solution is to check the singleton of MetaDrive and restart your program.

Below is our code snippet:

for i in range(len(self.env_list)):
    self.env_list[i].close()
obs_list = [env.reset()[0] for env in self.env_list]

Note that len(self.env_list) == 1.

pengzhenghao commented 3 months ago

That's expected as one process can only have one MetaDrive instance.

So we suggest:

env1 = MetaDriveEnv(...)
...
env1.close()
env2 = MetaDriveEnv(...)
...
env2.close()
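The one-instance-per-process rule above can be illustrated with a minimal stand-in class (a pure-Python sketch of the constraint, not MetaDrive's actual implementation — the real check lives in the engine singleton inside metadrive.envs.base_env):

```python
class SingletonEnv:
    """Illustrative stand-in: only one live instance is allowed per process."""
    _active = None  # class-level handle to the currently open instance

    def __init__(self):
        if SingletonEnv._active is not None:
            raise ValueError(
                "Current instance is broken: another env is still active. "
                "Call close() on it before creating a new one."
            )
        SingletonEnv._active = self

    def reset(self):
        # Resetting an env that no longer owns the engine fails, which is
        # analogous to the ValueError raised by MetaDrive's base_env.reset().
        if SingletonEnv._active is not self:
            raise ValueError("Instance is broken; the engine belongs to another env.")
        return "obs"

    def close(self):
        # Release the process-wide slot so a new env can be created.
        if SingletonEnv._active is self:
            SingletonEnv._active = None


env1 = SingletonEnv()
env1.reset()
env1.close()           # release the process-wide engine first...
env2 = SingletonEnv()  # ...then creating a second env succeeds
env2.reset()
env2.close()
```

Creating `env2` before `env1.close()` would raise, which mirrors the "Current MetaDrive instance is broken" error in the traceback above.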
pengzhenghao commented 3 months ago

In the case where you have a training environment, and you want to instantiate a new eval env, you should close the training env first before creating the eval env.
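That ordering can be sketched as a small helper. This is a hypothetical pattern using a stub in place of MetaDriveEnv (the `StubEnv`, `run_eval`, and `make_eval_env` names are illustrative, not part of MetaDrive's API):

```python
class StubEnv:
    """Hypothetical stand-in for MetaDriveEnv that tracks live instances."""
    open_count = 0  # process-wide count of live envs; must stay <= 1

    def __init__(self):
        assert StubEnv.open_count == 0, "only one env may exist per process"
        StubEnv.open_count += 1
        self.closed = False

    def reset(self):
        return "obs", {}

    def close(self):
        if not self.closed:
            self.closed = True
            StubEnv.open_count -= 1


def run_eval(train_env, make_eval_env):
    """Close the training env BEFORE creating the eval env, then clean up."""
    train_env.close()           # release the engine first
    eval_env = make_eval_env()  # safe now: no other live instance
    obs, info = eval_env.reset()
    # ... roll out evaluation episodes here ...
    eval_env.close()            # release again before training resumes
    return obs
```

After `run_eval` returns, the training env must be re-created (not merely reset) before training continues, since it was closed to make room for the eval env.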

guanjiayi commented 3 months ago

Hi Dr. Peng. Thank you for the feedback. I was quite busy over the past couple of days and forgot to respond. The issue has been resolved.