cure-lab / MagicDrive

[ICLR24] Official implementation of the paper “MagicDrive: Street View Generation with Diverse 3D Geometry Control”
https://gaoruiyuan.com/magicdrive/
GNU Affero General Public License v3.0

Errors in distributed training #91

Closed QChencq closed 1 month ago

QChencq commented 1 month ago

When I was training MagicDrive on a cluster, I encountered the following error:

Traceback (most recent call last):
  File "tools/train.py", line 131, in main
    runner.run()
  File "./magicdrive/runner/base_runner_noise.py", line 365, in run
    for step, batch in enumerate(self.train_dataloader):
  File "/opt/conda/envs/magic/lib/python3.7/site-packages/accelerate/data_loader.py", line 377, in __iter__
    current_batch = next(dataloader_iter)
  File "/opt/conda/envs/magic/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "/opt/conda/envs/magic/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
    return self._process_data(data)
  File "/opt/conda/envs/magic/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
    data.reraise()
  File "/opt/conda/envs/magic/lib/python3.7/site-packages/torch/_utils.py", line 434, in reraise
    raise exception
KeyError: Caught KeyError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/opt/conda/envs/magic/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/opt/conda/envs/magic/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
    return self.collate_fn(data)
  File "./magicdrive/dataset/utils.py", line 467, in collate_fn
    example_ti, template=template, tokenizer=tokenizer, **kwargs)
  File "./magicdrive/dataset/utils.py", line 388, in collate_fn_single
    ], dim=-1) for example in examples], dim=0)
  File "./magicdrive/dataset/utils.py", line 388, in <listcomp>
    ], dim=-1) for example in examples], dim=0)
KeyError: 'camera_intrinsics'

After debugging, I found that the missing data might be caused by distributed training: the collate function cannot find the 'camera_intrinsics' key in the examples. Have you ever encountered this problem?
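For anyone hitting the same KeyError: before blaming distributed training, it can help to iterate a few samples on a single process and confirm every example actually carries the keys the collate function expects. The sketch below is only illustrative and assumes you have already built `train_dataset` the same way `tools/train.py` does for your config; the helper name and the key set are placeholders, not part of the repo.

```python
# Hypothetical single-process sanity check; build `train_dataset` from your own
# training config before calling this (not shown here).
REQUIRED_KEYS = {"camera_intrinsics"}  # the key reported missing in the traceback


def check_dataset_keys(train_dataset, num_samples=10):
    """Print which of the first few samples are missing the required keys."""
    for idx in range(min(num_samples, len(train_dataset))):
        example = train_dataset[idx]
        missing = REQUIRED_KEYS - set(example.keys())
        if missing:
            print(f"sample {idx} is missing keys: {sorted(missing)}")
        else:
            print(f"sample {idx} OK")


# Usage (after constructing the dataset exactly as the training script does):
# check_dataset_keys(train_dataset)
```

If the key is already missing on a single process with `num_workers=0`, the problem is in the data pipeline or configuration rather than in the distributed launcher.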

QChencq commented 1 month ago

Or could you give me some help? Thanks.

flymin commented 1 month ago

> After debugging, I found that the missing data might be caused by distributed training.

I do not think so. Please double-check your data configuration. There should not be such an issue if you are using the provided processed metadata.
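As a quick way to act on this advice, one can inspect the processed metadata file referenced by the training config and confirm that per-camera intrinsics are present. The sketch below is a guess at a typical check: the path is a placeholder, and the field names ('infos', 'cams', 'cam_intrinsic') follow the mmdet3d-style nuScenes info format that the repo's data preparation is based on; adjust them if your pickle is laid out differently.

```python
import pickle

# Placeholder path: point this at the ann_file used in your training config.
ANN_FILE = "data/nuscenes_infos_train.pkl"

with open(ANN_FILE, "rb") as f:
    meta = pickle.load(f)

# mmdet3d-style pickles usually wrap the per-sample list in an "infos" key.
infos = meta["infos"] if isinstance(meta, dict) and "infos" in meta else meta
first = infos[0]
print("per-sample keys:", sorted(first.keys()))

# Check that every camera entry carries an intrinsic matrix (assumed field name).
for cam_name, cam in first.get("cams", {}).items():
    print(f"{cam_name}: cam_intrinsic present = {'cam_intrinsic' in cam}")
```

If these fields are absent, the metadata was likely not generated with the repo's preparation steps, which would explain the missing 'camera_intrinsics' at collate time.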

QChencq commented 1 month ago

Thank you! It was indeed a problem with my data configuration.