The same import bug as above occurred in dataset/hvsmr.py and dataset/mmwhs.py:
from batchgenerators.dataloading import MultiThreadedAugmenter
from batchgenerators.transforms import Compose, RndTransform
from batchgenerators.transforms import SpatialTransform, MirrorTransform
from batchgenerators.transforms import GammaTransform, ConvertSegToOnehotTransform
from batchgenerators.transforms import RandomCropTransform
should be corrected to:
from batchgenerators.dataloading.multi_threaded_augmenter import MultiThreadedAugmenter
from batchgenerators.transforms.abstract_transforms import Compose, RndTransform
from batchgenerators.transforms.spatial_transforms import SpatialTransform, MirrorTransform
from batchgenerators.transforms.color_transforms import GammaTransform
from batchgenerators.transforms.utility_transforms import ConvertSegToOnehotTransform
from batchgenerators.transforms.crop_and_pad_transforms import RandomCropTransform
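When a library reorganizes its submodules like this, the new home of a class can be found mechanically instead of by trial and error. Below is a minimal sketch; find_symbol is a hypothetical helper, demonstrated on the standard library's json package since batchgenerators may not be installed in every environment:

```python
import importlib
import pkgutil

def find_symbol(package_name: str, symbol: str):
    """Walk a package's submodules and list those that define `symbol`."""
    package = importlib.import_module(package_name)
    hits = []
    for info in pkgutil.walk_packages(package.__path__, package_name + "."):
        try:
            mod = importlib.import_module(info.name)
        except Exception:
            continue  # skip submodules with missing optional dependencies
        if hasattr(mod, symbol):
            hits.append(info.name)
    return hits

# Demonstration on the standard library:
print(find_symbol("json", "JSONDecoder"))  # ['json.decoder']
```

With batchgenerators 0.23 installed, calling find_symbol("batchgenerators.transforms", "GammaTransform") should point at the color_transforms submodule in the same way.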
pkgs: batchgenerators 0.23
reported bugs:
notice:
model = torch.nn.DataParallel(model, device_ids=args.multiple_device_id)
in train_contrast.py reported an error, which is due to
parser.add_argument("--multiple_device_id", type=tuple, default=(0,1))
in myconfig.py: argparse applies type to the raw command-line string, so type=tuple turns a value like "(0,1)" into a tuple of single characters instead of a tuple of ints. In fact, when it comes to multiple GPUs, it is better to name the specific GPUs we request, e.g.
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "2,3"
in train_contrast.py before the main function.
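The type=tuple pitfall, and one possible replacement, can be sketched with argparse alone. The error shows up when the flag is actually passed on the command line; the parse_gpu_ids helper below is a hypothetical fix, not code from the repo:

```python
import argparse

# Why type=tuple fails: argparse calls `type` on the raw string,
# and tuple("(0,1)") splits it into individual characters.
broken = argparse.ArgumentParser()
broken.add_argument("--multiple_device_id", type=tuple, default=(0, 1))
args = broken.parse_args(["--multiple_device_id", "(0,1)"])
print(args.multiple_device_id)  # ('(', '0', ',', '1', ')') -- not GPU ids

# A hypothetical fix: parse a comma-separated list of ints instead.
def parse_gpu_ids(value: str):
    return [int(v) for v in value.split(",")]

fixed = argparse.ArgumentParser()
fixed.add_argument("--multiple_device_id", type=parse_gpu_ids, default=[0, 1])
args = fixed.parse_args(["--multiple_device_id", "0,1"])
print(args.multiple_device_id)  # [0, 1]
```

Note that with CUDA_VISIBLE_DEVICES set to "2,3", PyTorch renumbers the visible GPUs from zero, so device_ids=[0, 1] then refers to physical GPUs 2 and 3.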