Kaiseem / DAR-UNet

[JBHI2022] A novel 3D unsupervised domain adaptation framework for cross-modality medical image segmentation
Apache License 2.0

Problem that occurs after processing with the volumentations library #22

Open aaahuaaa opened 4 months ago

aaahuaaa commented 4 months ago

Hello, I am very interested in the data preprocessing approach you use and would like to try it myself.

In your 3D dataloader you use volumentations for augmentation and then train the segmentation network directly on images that have already been i2i-translated, so no problem arises there.

However, the framework I am using has to read and preprocess the data with a dataloader first, then translate the source-domain images to the target domain, and only afterwards train the segmentation network. During training setup, the dataloader is created first (we use your dataloader3d.py almost unchanged, and the dataset also comes from your files; we only converted the .npy files to .nii so they can be read with SimpleITK):

    db_train_t = SEGDataset(root=args.root_path_t, n_class=args.num_classes)  # target-domain dataset
    db_train_s = SEGDataset(root=args.root_path_s, n_class=args.num_classes)  # source-domain dataset
    trainloader_t = DataLoader(db_train_t, batch_size=batch_size_half, shuffle=True,
                               num_workers=0, pin_memory=True, worker_init_fn=worker_init_fn)
    trainloader_s = DataLoader(db_train_s, batch_size=batch_size_half, shuffle=True,
                               num_workers=0, pin_memory=True, worker_init_fn=worker_init_fn)
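
For reference, the .npy → .nii conversion mentioned above was roughly like the following; this is only a sketch, and the file names are placeholders rather than the actual paths:

    import numpy as np
    import SimpleITK as sitk

    # Placeholder file names; the real paths follow the preprocessed dataset layout.
    arr = np.load('case_0001.npy')              # volume array from the original .npy files
    img = sitk.GetImageFromArray(arr)           # wrap it as a SimpleITK image (default spacing/origin)
    sitk.WriteImage(img, 'case_0001.nii.gz')    # write NIfTI so the dataloader can read it with SimpleITK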

Then, when training starts, I use:

    for epoch_num in iterator:
        # Iterate over trainloader_s and trainloader_t together to get each batch of training data.
        for i_batch, sampled_batch in enumerate(zip(trainloader_s, trainloader_t)):

At this point the following error occurs:

Traceback (most recent call last):
  File "/public/ljy/code/3D_data.py", line 298, in <module>
    train(args, snapshot_path, exp_path)
  File "/public/ljy/code/3D_data.py", line 149, in train
    for i_batch, sampled_batch in enumerate(zip(trainloader_s, trainloader_t)):
  File "/home/user0/anaconda3/envs/LJY/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "/home/user0/anaconda3/envs/LJY/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 561, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/home/user0/anaconda3/envs/LJY/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/user0/anaconda3/envs/LJY/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/public/ljy/code/dataloader_3D_data.py", line 141, in __getitem__
    seg_augmented = self.aug(image=image, mask=mask)
  File "/home/user0/anaconda3/envs/LJY/lib/python3.9/site-packages/volumentations/core/composition.py", line 60, in __call__
    data = tr(force_apply, self.targets, **data)
  File "/home/user0/anaconda3/envs/LJY/lib/python3.9/site-packages/volumentations/core/transforms_interface.py", line 117, in __call__
    data[k] = self.apply(v, **params)
  File "/home/user0/anaconda3/envs/LJY/lib/python3.9/site-packages/volumentations/augmentations/transforms.py", line 131, in apply
    return F.rescale_warp(img, scale, interpolation=self.interpolation)
  File "/home/user0/anaconda3/envs/LJY/lib/python3.9/site-packages/volumentations/augmentations/functional.py", line 441, in rescale_warp
    return map_coordinates(img, coords, order=interpolation, mode=border_mode, cval=value)
NameError: name 'border_mode' is not defined

I suspect the cause of this error might be that defining the aug function in __init__ somehow changed how the data is defined? Or is something going wrong when the processed data is assembled into batches? This problem has bothered me for two days and I really cannot find the issue, so I am asking for your advice.
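
If it helps narrow this down: would indexing the dataset directly, outside of any DataLoader, be a reasonable check? A minimal sketch (using the same SEGDataset and args as above; this is only my guess at how to isolate the problem):

    # Bypass the DataLoader and the zip() loop entirely and call __getitem__ directly.
    db = SEGDataset(root=args.root_path_s, n_class=args.num_classes)
    sample = db[0]   # runs self.aug(image=image, mask=mask) inside __getitem__

    # If this alone raises "NameError: name 'border_mode' is not defined", the error would
    # come from inside volumentations (rescale_warp in the traceback), not from where aug
    # is defined or from how the two dataloaders are combined.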