Project-MONAI / MONAI

AI Toolkit for Healthcare Imaging
https://monai.io/
Apache License 2.0

RandCropByPosNegLabeld AttributeError: 'list' object has no attribute 'keys' #1486

Closed OeslleLucena closed 3 years ago

OeslleLucena commented 3 years ago

I am running a minimal example of patch-based segmentation based on the spleen Lightning segmentation tutorial from here. However, the training batch passed to training_step is a list instead of a dict, while the validation step runs with no error. Here is the error output:

| Name      | Type              | Params
------------------------------------------------
0 | net       | Generic_UNet      | 13.9 M
1 | criterion | BCEWithLogitsLoss | 0     
------------------------------------------------
13.9 M    Trainable params
0         Non-trainable params
13.9 M    Total params
/home/ol18/miniconda3/envs/tract_uncertainty/lib/python3.6/site-packages/pytorch_lightning/utilities/distributed.py:49: UserWarning: The dataloader, val dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 8 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
  warnings.warn(*args, **kwargs)
Validation sanity check:   0%|          | 0/2 [00:00<?, ?it/s]/home/ol18/miniconda3/envs/tract_uncertainty/lib/python3.6/site-packages/monai/data/utils.py:300: UserWarning: Modifying image pixdim from [1.25 1.25 1.25 1.  ] to [  1.25         1.25         1.25       170.76592166]
  warnings.warn(f"Modifying image pixdim from {pixdim} to {norm}")
/home/ol18/miniconda3/envs/tract_uncertainty/lib/python3.6/site-packages/monai/data/utils.py:300: UserWarning: Modifying image pixdim from [1.25 1.25 1.25 1.  ] to [  1.25         1.25         1.25       170.76592166]
  warnings.warn(f"Modifying image pixdim from {pixdim} to {norm}")
outputs torch.Size([1, 73, 145, 174, 145])
Validation sanity check:  50%|█████     | 1/2 [00:12<00:12, 12.25s/it]outputs torch.Size([1, 73, 145, 174, 145])
Epoch 0:   0%|          | 0/84 [00:00<?, ?it/s] /home/ol18/miniconda3/envs/tract_uncertainty/lib/python3.6/site-packages/pytorch_lightning/utilities/distributed.py:49: UserWarning: The dataloader, train dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 8 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
  warnings.warn(*args, **kwargs)
/home/ol18/miniconda3/envs/tract_uncertainty/lib/python3.6/site-packages/monai/data/utils.py:300: UserWarning: Modifying image pixdim from [1.25 1.25 1.25 1.  ] to [  1.25         1.25         1.25       170.76592166]
  warnings.warn(f"Modifying image pixdim from {pixdim} to {norm}")
/home/ol18/miniconda3/envs/tract_uncertainty/lib/python3.6/site-packages/monai/data/utils.py:300: UserWarning: Modifying image pixdim from [1.25 1.25 1.25 1.  ] to [  1.25         1.25         1.25       170.76592166]
  warnings.warn(f"Modifying image pixdim from {pixdim} to {norm}")
Epoch 0:   0%|          | 0/84 [00:04<?, ?it/s]
Traceback (most recent call last):
  File "/home/ol18/Codes/TractUncertainty/segmentation_sh_torchIO_PIL MONAI.py", line 378, in <module>
    trainer.fit(model) #tune(model)
  File "/home/ol18/miniconda3/envs/tract_uncertainty/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 509, in fit
    results = self.accelerator_backend.train()
  File "/home/ol18/miniconda3/envs/tract_uncertainty/lib/python3.6/site-packages/pytorch_lightning/accelerators/accelerator.py", line 57, in train
    return self.train_or_test()
  File "/home/ol18/miniconda3/envs/tract_uncertainty/lib/python3.6/site-packages/pytorch_lightning/accelerators/accelerator.py", line 74, in train_or_test
    results = self.trainer.train()
  File "/home/ol18/miniconda3/envs/tract_uncertainty/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 560, in train
    self.train_loop.run_training_epoch()
  File "/home/ol18/miniconda3/envs/tract_uncertainty/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 534, in run_training_epoch
    batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)
  File "/home/ol18/miniconda3/envs/tract_uncertainty/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 692, in run_training_batch
    self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure)
  File "/home/ol18/miniconda3/envs/tract_uncertainty/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 475, in optimizer_step
    using_lbfgs=is_lbfgs,
  File "/home/ol18/miniconda3/envs/tract_uncertainty/lib/python3.6/site-packages/pytorch_lightning/core/lightning.py", line 1264, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "/home/ol18/miniconda3/envs/tract_uncertainty/lib/python3.6/site-packages/pytorch_lightning/core/optimizer.py", line 286, in step
    self.__optimizer_step(*args, closure=closure, profiler_name=profiler_name, **kwargs)
  File "/home/ol18/miniconda3/envs/tract_uncertainty/lib/python3.6/site-packages/pytorch_lightning/core/optimizer.py", line 144, in __optimizer_step
    optimizer.step(closure=closure, *args, **kwargs)
  File "/home/ol18/miniconda3/envs/tract_uncertainty/lib/python3.6/site-packages/torch/optim/lr_scheduler.py", line 67, in wrapper
    return wrapped(*args, **kwargs)
  File "/home/ol18/miniconda3/envs/tract_uncertainty/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
    return func(*args, **kwargs)
  File "/home/ol18/miniconda3/envs/tract_uncertainty/lib/python3.6/site-packages/torch/optim/rmsprop.py", line 66, in step
    loss = closure()
  File "/home/ol18/miniconda3/envs/tract_uncertainty/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 687, in train_step_and_backward_closure
    self.trainer.hiddens
  File "/home/ol18/miniconda3/envs/tract_uncertainty/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 780, in training_step_and_backward
    result = self.training_step(split_batch, batch_idx, opt_idx, hiddens)
  File "/home/ol18/miniconda3/envs/tract_uncertainty/lib/python3.6/site-packages/pytorch_lightning/trainer/training_loop.py", line 301, in training_step
    training_step_output = self.trainer.accelerator_backend.training_step(args)
  File "/home/ol18/miniconda3/envs/tract_uncertainty/lib/python3.6/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 71, in training_step
    return self._step(self.trainer.model.training_step, args)
  File "/home/ol18/miniconda3/envs/tract_uncertainty/lib/python3.6/site-packages/pytorch_lightning/accelerators/gpu_accelerator.py", line 66, in _step
    output = model_step(*args)
  File "/home/ol18/Codes/TractUncertainty/segmentation_sh_torchIO_PIL MONAI.py", line 224, in training_step
    print('train',batch.keys())
AttributeError: 'list' object has no attribute 'keys'

Below is how the transforms, the training/validation steps, and the data loaders are defined.

    def prepare_data(self):
        data_path = Path(self.hparams.data_path)
        csv_path = Path(self.hparams.csv_path)

        if self.hparams.norm:
            input_data = 'input_norm'
        else:
            input_data = 'input'

        subjects_folders = [data_path / input_data,
                            data_path / 'label',
                            data_path / 'mask']
        subjects_prefixes = ['dwish', 'tractsmask', 'dwimask']

        fold_file = csv_path / f'fold{self.hparams.fold}.csv'

        df = pd.read_csv(fold_file, names=['SUBJECTS', 'SET'])
        subjects_names = {x: list(df[df.SET == x].SUBJECTS)
                          for x in [TRAINING, VALIDATION, INFERENCE]
                          }

        subjects_list = {x: monai_data_dicts(subjects_folders,
                                             subjects_prefixes,
                                             subjects_names[x])
                         for x in [TRAINING, VALIDATION, INFERENCE]
                         }

        # set deterministic training for reproducibility
        set_determinism(seed=0)

        # define the data transforms
        train_transforms = Compose(
            [
                LoadImaged(keys=["image", "label"]),
                Orientationd(keys=["image", "label"], axcodes="RAS"),
                # randomly crop out patch samples from big image based on pos / neg ratio
                # the image centers of negative samples must be in valid image area
                RandCropByPosNegLabeld(
                    keys=["image", "label"],
                    label_key="label",
                    spatial_size=(32, 32, 32),
                    num_samples=4,
                ),
                # DataStatsd(keys=['image', 'label'], data_value=False),
                ToTensord(keys=["image", "label"])
            ]
        )
        val_transforms = Compose(
            [
                LoadImaged(keys=["image", "label"]),
                Orientationd(keys=["image", "label"], axcodes="RAS"),
                ToTensord(keys=["image", "label"])
            ]
        )

        # plain (non-cached) datasets are used here; the tutorial uses CacheDataset for speed
        self.train_ds = Dataset(
            data=subjects_list[TRAINING], transform=train_transforms,
        )
        self.val_ds = Dataset(
            data=subjects_list[VALIDATION], transform=val_transforms,
        )

    def training_step(self, batch, batch_idx):
        print('train',batch.keys())
        inputs = batch['image']
        labels = batch['label']
        outputs = self(inputs)

        loss = self.criterion(outputs, labels)
        dice = dice_score(torch.sigmoid(outputs), labels)

        # Calling self.log will surface up scalars for you in TensorBoard
        self.log('train_loss', loss, on_step=True, on_epoch=True, prog_bar=True, logger=True)
        self.log('train_dice', dice, on_step=True, on_epoch=True, prog_bar=True, logger=True)
        return loss

    def validation_step(self, batch, batch_idx):
        inputs = batch['image']
        labels = batch['label']
        roi_size = (32, 32, 32)
        sw_batch_size = 4
        outputs = sliding_window_inference(inputs, roi_size, sw_batch_size, self.forward)
        print('outputs', outputs.shape)
        loss = self.criterion(outputs, labels)
        dice = dice_score(torch.sigmoid(outputs), labels)

        self.log_image_tensorboard(labels, torch.sigmoid(outputs) >= 0.5, 'val_images')

        # Calling self.log will surface up scalars for you in TensorBoard
        self.log('val_loss', loss, on_epoch=True, prog_bar=True, logger=True)
        self.log('val_dice', dice, on_epoch=True, prog_bar=True, logger=True)
        return loss

    def train_dataloader(self):
        train_loader = DataLoader(self.train_ds,
                                  batch_size=1, #self.hparams.batch_size,
                                  num_workers=2,
                                  pin_memory=True
                                  )
        return train_loader

    def val_dataloader(self):
        val_loader = DataLoader(self.val_ds,
                                batch_size=1,  # self.hparams.batch_size,
                                num_workers=2,
                                pin_memory=True
                                )
        return val_loader
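
For context, each element of subjects_list[TRAINING] built by my monai_data_dicts helper is meant to be a plain dictionary whose keys match the transform keys, roughly like this (the file paths below are illustrative, not my actual data):

    # illustrative item only; real paths come from the subject folders/prefixes above
    example_item = {
        "image": "/data/input/dwish_subject01.nii.gz",
        "label": "/data/label/tractsmask_subject01.nii.gz",
    }
    # subjects_list[TRAINING] is then a list of such dicts, which is what
    # Dataset and the dictionary transforms (LoadImaged, etc.) expect.
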
rijobro commented 3 years ago

Hi, what makes you think this is related to RandCropByPosNegLabeld?

This part of the error:

output = model_step(*args)
  File "/home/ol18/Codes/TractUncertainty/segmentation_sh_torchIO_PIL MONAI.py", line 224, in training_step
    print('train',batch.keys())
AttributeError: 'list' object has no attribute 'keys'

implies to me that it's trying to use a dictionary-based transform on a list. That is, subjects_list[TRAINING] looks to be a list as opposed to a dictionary. Is that correct?
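
For example, something along these lines (just a sketch using the names from your snippet) should print a dict for the dictionary-based transforms to work:

    first_item = subjects_list[TRAINING][0]
    print(type(first_item))    # expect <class 'dict'>
    print(first_item.keys())   # expect dict_keys(['image', 'label'])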

OeslleLucena commented 3 years ago

Hi @rijobro! You're right, I just don't understand why this is happening. I swapped subjects_list[TRAINING] for subjects_list[VALIDATION] to see if the problem was with subjects_list[TRAINING], but the error is the same. Any idea what it could potentially be?