Galaxies99 / TransCG

[RAL 2022 & ICRA 2023] TransCG: A Large-Scale Real-World Dataset for Transparent Object Depth Completion and A Grasping Baseline
https://graspnet.net/transcg

Training on the ClearGrasp dataset fails at 30% progress #10

Closed ZhiyangZhou24 closed 1 year ago

ZhiyangZhou24 commented 1 year ago

When training DFNet on the ClearGrasp dataset, I hit a dataset-loading error. The log is as follows:

[main][INFO] Building models ... (train.py:41)
[main][INFO] Building dataloaders ... (train.py:54)
[utils.builder][INFO] Load cleargrasp-syn dataset as training set with 45454 samples. (builder.py:218)
[utils.builder][INFO] Load cleargrasp-real dataset as testing set with 286 samples. (builder.py:218)
[main][INFO] Checking checkpoints ... (train.py:58)
[main][INFO] Building optimizer and learning rate schedulers ... (train.py:71)
[main][INFO] --> Epoch 1/40 (train.py:153)
[main][INFO] Start training process in epoch 1. (train.py:84)
[main][INFO] Learning rate: [0.001]. (train.py:86)
Epoch 1, loss: 0.00160300, smooth loss: 0.81160860:  30%|████████▊ | 1723/5681 [18:22<42:12, 1.56it/s]
Traceback (most recent call last):
  File "train.py", line 174, in <module>
    train(start_epoch = start_epoch)
  File "train.py", line 154, in train
    train_one_epoch(epoch)
  File "train.py", line 90, in train_one_epoch
    for data_dict in pbar:
  File "/home/lab/anaconda3/envs/transcg/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__
    for obj in iterable:
  File "/home/lab/anaconda3/envs/transcg/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 681, in __next__
    data = self._next_data()
  File "/home/lab/anaconda3/envs/transcg/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1356, in _next_data
    return self._process_data(data)
  File "/home/lab/anaconda3/envs/transcg/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1402, in _process_data
    data.reraise()
  File "/home/lab/anaconda3/envs/transcg/lib/python3.7/site-packages/torch/_utils.py", line 461, in reraise
    raise exception
RuntimeError: Caught RuntimeError in DataLoader worker process 11.
Original Traceback (most recent call last):
  File "/home/lab/anaconda3/envs/transcg/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/lab/anaconda3/envs/transcg/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
    return self.collate_fn(data)
  File "/home/lab/anaconda3/envs/transcg/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 160, in default_collate
    return elem_type({key: default_collate([d[key] for d in batch]) for key in elem})
  File "/home/lab/anaconda3/envs/transcg/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 160, in <dictcomp>
    return elem_type({key: default_collate([d[key] for d in batch]) for key in elem})
  File "/home/lab/anaconda3/envs/transcg/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 140, in default_collate
    out = elem.new(storage).resize_(len(batch), *list(elem.size()))
RuntimeError: Trying to resize storage that is not resizable

Listed below are my training parameters. Following an earlier issue, I changed the "shuffle" parameter to False and left everything else at the default values. Training then runs, but it still raises the error above, and it always fails at the same iteration (the 30% mark). How can this problem be solved?

"model":
  "type": "DFNet"
  "params":
    "in_channels": 4
    "hidden_channels": 64
    "L": 5
    "k": 12

"optimizer":
  "type": "AdamW"
  "params":
    "lr": 0.001

"lr_scheduler":
  "type": "MultiStepLR"
  "params":
    "milestones": [5, 15, 25, 35]
    "gamma": 0.2

"dataset":
  "train":
    "type": "cleargrasp-syn"
    "data_dir": "/media/lab/d/ZZY/datasets/cleargrasp"
    "image_size": !!python/tuple [320, 240]
    "use_augmentation": True
    "rgb_augmentation_probability": 0.8
    "depth_min": 0.3
    "depth_max": 1.5
    "depth_norm": 1.0
    "with_original": True
  "test":
    "type": "cleargrasp-real"
    "data_dir": "/media/lab/d/ZZY/datasets/cleargrasp"
    "image_size": !!python/tuple [320, 240]
    "use_augmentation": False
    "depth_min": 0.3
    "depth_max": 1.5
    "depth_norm": 1.0
    "with_original": True

"dataloader":
  "num_workers": 16
  "shuffle": False
  "drop_last": True

"trainer":
  "batch_size": 8
  "test_batch_size": 1
  "multigpu": False
  "max_epoch": 40
  "criterion":
    "type": "custom_masked_mse_loss"
    "epsilon": 0.00000001
    "combined_smooth": True
    "combined_beta": 0.001

"metrics":
  "types": ["MSE", "MaskedMSE", "RMSE", "MaskedRMSE", "REL", "MaskedREL", "MAE", "MaskedMAE", "Threshold@1.05", "MaskedThreshold@1.05", "Threshold@1.10", "MaskedThreshold@1.10", "Threshold@1.25", "MaskedThreshold@1.25"]
  "epsilon": 0.00000001
  "depth_scale": 1.0

"stats":
  "stats_dir": "stats"
  "stats_exper": "train-cgsy-val-cgre"

Galaxies99 commented 1 year ago

Sorry, we have not encountered this problem before.

You could try setting with_original to False; then you should be able to set shuffle back to True. Please check whether the problem still appears.
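The reason with_original is the relevant switch here is that it puts each sample's original-resolution data into the batch; if those originals do not all share one size, PyTorch's default_collate cannot stack them inside the worker processes and fails exactly as in the log above. A minimal standalone sketch (not TransCG code; the dataset, keys, and image sizes are invented for illustration) that reproduces the same RuntimeError:

```python
# A minimal standalone sketch (not TransCG code): the dataset, keys and image
# sizes below are invented purely to show how default_collate raises
# "Trying to resize storage that is not resizable" when samples in one batch
# carry tensors of different shapes and the DataLoader uses worker processes.
import torch
from torch.utils.data import Dataset, DataLoader


class MixedSizeDataset(Dataset):
    """Every sample has a fixed-size 'depth' and a variable-size 'depth_original'."""

    def __len__(self):
        return 16

    def __getitem__(self, idx):
        # Pretend the first sample's original image is larger than the others.
        h, w = (1080, 1920) if idx == 0 else (720, 1280)
        return {
            'depth': torch.zeros(240, 320),        # resized input, always collatable
            'depth_original': torch.zeros(h, w),   # original resolution, varies
        }


if __name__ == '__main__':
    loader = DataLoader(MixedSizeDataset(), batch_size=8, num_workers=2, shuffle=False)
    for batch in loader:
        # In a worker process, default_collate pre-allocates one shared-memory
        # storage per dict key and resizes it to batch_size * first_element_size;
        # with mismatched shapes this raises the RuntimeError seen in the log
        # (single-process loading reports a shape-mismatch error instead).
        pass
```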

ZhiyangZhou24 commented 1 year ago

Thank you very much for your reply. I have solved the problem.

ZhiyangZhou24 commented 1 year ago

Hello, I would like to ask another question about the released DFNet code. I am trying to train DFNet on the ClearGrasp dataset and want to add the smooth_loss term. However, training crashes once I enable with_original, and it only runs normally after turning with_original off (setting it to False), in which case the smoothness loss cannot be added. If I want to train on the ClearGrasp dataset with the smoothness loss term, how should I modify the code? Thanks.

Galaxies99 commented 1 year ago

This is mainly because the loss computation (utils/criterion.py) needs the original images to compute the surface normals, which is why the with_original option is required. There are roughly two ways to modify this:

  1. Resize the original images in the dataset to one common size; or
  2. Change the loss so that the surface normals are computed from the downsampled images rather than the originals; note that in this case the camera intrinsics also have to be rescaled to match the new image size (see the sketch below).
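For option 2, a rough sketch of the intrinsics adjustment (not from the released code; the function name, the camera matrix values, and the 1280x720 to 320x240 resize are only illustrative): when the depth map is downsampled by factors sx and sy, fx and cx are scaled by sx, and fy and cy by sy, before the surface normals are computed from the resized depth.

```python
# Rough sketch for option 2 (not from the released code): rescale the pinhole
# intrinsics to match the downsampled depth map before computing surface
# normals. The function name and all numbers below are illustrative only.
import numpy as np


def rescale_intrinsics(K, orig_size, new_size):
    """Scale a 3x3 intrinsics matrix from orig_size to new_size.

    orig_size, new_size: (width, height) tuples of the depth maps.
    """
    sx = new_size[0] / orig_size[0]   # horizontal scale factor
    sy = new_size[1] / orig_size[1]   # vertical scale factor
    K_scaled = K.astype(np.float64).copy()
    K_scaled[0, 0] *= sx  # fx
    K_scaled[0, 2] *= sx  # cx
    K_scaled[1, 1] *= sy  # fy
    K_scaled[1, 2] *= sy  # cy
    return K_scaled


# Example: placeholder intrinsics for a 1280x720 original, rescaled to the
# 320x240 resolution used for training, so that surface normals computed from
# the downsampled depth stay geometrically consistent.
K_orig = np.array([[917.0,   0.0, 640.0],
                   [  0.0, 917.0, 360.0],
                   [  0.0,   0.0,   1.0]])
K_small = rescale_intrinsics(K_orig, orig_size=(1280, 720), new_size=(320, 240))
```
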
thk8181 commented 1 year ago

Sorry to bother you again. We tried the suggestion from @Galaxies99, but the problem is still not solved. Could you share how you solved it? Many thanks!