opendilab / DI-drive

Decision Intelligence Platform for Autonomous Driving simulation.
https://opendilab.github.io/DI-drive/
Apache License 2.0
570 stars · 57 forks

No such file or directory: '/home/DI-drive/didrive/lib/python3.8/site-packages/core/data/benchmark/corl2017/099/full_Town01.txt' #32

Open SExpert12 opened 3 months ago

SExpert12 commented 3 months ago

Hi, I'm stuck on this error when I run the RL training script like this:

python3 simple_rl_train.py -p ddpg

```
/home/DI-drive/didrive/lib/python3.8/site-packages/treevalue/tree/integration/torch.py:21: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  register_for_torch(TreeValue)
/home/DI-drive/didrive/lib/python3.8/site-packages/treevalue/tree/integration/torch.py:22: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  register_for_torch(FastTreeValue)
[07-04 10:02:39] WARNING  If you want to use numba to speed up segment tree, please install numba first  default_helper.py:450
/home/DI-drive/didrive/lib/python3.8/site-packages/gym/envs/registration.py:440: UserWarning: WARN: The registry.env_specs property along with EnvSpecTree is deprecated. Please use registry directly as a dictionary instead.
  logger.warn(
[ENV] Register environments: ['SimpleCarla-v1', 'ScenarioCarla-v1'].
/home/DI-drive/didrive/lib/python3.8/site-packages/gym/core.py:329: DeprecationWarning: WARN: Initializing wrapper in old step API which returns one bool instead of two. It is recommended to set new_step_api=True to use new step API. This will be the default behaviour in future.
  deprecation(
Traceback (most recent call last):
  File "simple_rl_train.py", line 243, in <module>
    main(args)
  File "simple_rl_train.py", line 153, in main
    collector_env = SyncSubprocessEnvManager(
  File "/home/DI-drive/didrive/lib/python3.8/site-packages/ding/envs/env_manager/subprocess_env_manager.py", line 79, in __init__
    super().__init__(env_fn, cfg)
  File "/home/DI-drive/didrive/lib/python3.8/site-packages/ding/envs/env_manager/base_env_manager.py", line 135, in __init__
    self._env_ref = self._env_fn[0]()
  File "simple_rl_train.py", line 35, in wrapped_continuous_env
    return BenchmarkEnvWrapper(ContinuousEnvWrapper(env), wrapper_cfg)
  File "/home/DI-drive/didrive/lib/python3.8/site-packages/core/envs/drive_env_wrapper.py", line 153, in __init__
    pose_pairs = read_pose_txt(benchmark_dir, poses_txt)
  File "/home/DI-drive/didrive/lib/python3.8/site-packages/core/data/benchmark/benchmark_utils.py", line 32, in read_pose_txt
    pose_pairs = pairs_file.read_text().strip().split('\n')
  File "/usr/lib/python3.8/pathlib.py", line 1236, in read_text
    with self.open(mode='r', encoding=encoding, errors=errors) as f:
  File "/usr/lib/python3.8/pathlib.py", line 1222, in open
    return io.open(self, mode, buffering, encoding, errors, newline,
  File "/usr/lib/python3.8/pathlib.py", line 1078, in _opener
    return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/DI-drive/didrive/lib/python3.8/site-packages/core/data/benchmark/corl2017/099/full_Town01.txt'
```
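From the traceback, `read_pose_txt` just reads the benchmark pose file and splits it into lines, and the crash means that file was never installed under `core/data/benchmark/corl2017/`. A defensive variant of the helper (a sketch, not the project's actual code; the explicit existence check and error message are my additions) would surface the real cause earlier:

```python
from pathlib import Path


def read_pose_txt(benchmark_dir, poses_txt):
    """Read a benchmark pose file and return its lines.

    Sketch of core/data/benchmark/benchmark_utils.read_pose_txt, with an
    explicit check added so a missing benchmark data file produces an
    actionable message instead of a bare FileNotFoundError from pathlib.
    """
    pairs_file = Path(benchmark_dir) / poses_txt
    if not pairs_file.is_file():
        raise FileNotFoundError(
            f"Benchmark pose file not found: {pairs_file}. "
            "Check that the DI-drive benchmark data "
            "(core/data/benchmark/...) was installed along with the package."
        )
    # Same behaviour as the line shown in the traceback.
    pose_pairs = pairs_file.read_text().strip().split('\n')
    return pose_pairs
```

Running such a check (or simply `ls`-ing the `corl2017` directory in `site-packages`) would confirm whether the benchmark data files made it into your install at all.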

How can I resolve this?

Also, I'd like to know: does this support multi-agent scenarios?

PaParaZz1 commented 2 months ago

DI-drive doesn't support multi-agent scenarios at present.