Project-MONAI / MONAI

AI Toolkit for Healthcare Imaging
https://monai.io/
Apache License 2.0

Monai segmentation. Unable to use datasets besides Spleen. #876

Closed: dlabella29 closed this issue 4 years ago

dlabella29 commented 4 years ago

**Issue:** Unable to use datasets other than Task09_Spleen from the Medical Segmentation Decathlon when running the spleen_segmentation_3d script. I believe the issue is with the Spacingd (pixdim) transform, but it could be elsewhere. The other organ dataset has the same file tree structure. I also tried the other dataset with the .nii images and labels resampled to 1x1x1.

*Note:* I imported the script into PyCharm and ran it as a .py file. The spleen dataset trains well, with Dice > 0.94 after 600 epochs.

**Steps to reproduce the behavior:**

  1. Open the spleen_segmentation_3d.py script.

  2. Working for spleen: download and reference the Task09_Spleen dataset from the Medical Segmentation Decathlon:

    data_root = r'/home/USER/PycharmProjects/MONAI/Task09_Spleen'
    train_images = sorted(glob.glob(os.path.join(data_root, 'imagesTr', '*.nii.gz')))
    train_labels = sorted(glob.glob(os.path.join(data_root, 'labelsTr', '*.nii.gz')))

  2. Not working for other organ:

    data_root = r'/home/USER/PycharmProjects/MONAI/Task_other_organs'
    train_images = sorted(glob.glob(os.path.join(data_root, 'imagesTr', '*.nii.gz')))
    train_labels = sorted(glob.glob(os.path.join(data_root, 'labelsTr', '*.nii.gz')))
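In both cases the matched image/label paths are then paired into the dictionary format MONAI expects (a minimal sketch, assuming the variables above and the usual tutorial-style split; the split size is illustrative):

    # pair image and label paths into MONAI-style data dictionaries
    data_dicts = [{'image': img, 'label': lbl} for img, lbl in zip(train_images, train_labels)]
    # hold out the last few cases for validation
    train_files, val_files = data_dicts[:-9], data_dicts[-9:]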

  3. Working for spleen: use the default transforms:

    train_transforms = Compose([
        LoadNiftid(keys=['image', 'label']),
        AddChanneld(keys=['image', 'label']),
        Spacingd(keys=['image', 'label'], pixdim=(1.5, 1.5, 2), mode=('bilinear', 'nearest')),
        Orientationd(keys=['image', 'label'], axcodes='RAS'),
        ScaleIntensityRanged(keys=['image'], a_min=-57, a_max=164, b_min=0.0, b_max=1.0, clip=True),
        CropForegroundd(keys=['image', 'label'], source_key='image'),
        RandCropByPosNegLabeld(keys=['image', 'label'], label_key='label', spatial_size=(96, 96, 96),
                               pos=1, neg=1, num_samples=4, image_key='image', image_threshold=0),
        ToTensord(keys=['image', 'label'])
    ])

  3. Not working for other organs: I tried modified transforms that use an intensity range of 300 to 1600 for a bone window. I believe my error is with the pixdim in the Spacingd transform. I have tried the default (1.5, 1.5, 2); (1, 1, 1); (3, 3, 4); and a number of other combinations. I have also tried the spleen intensity range of -57 to 164.

    train_transforms = Compose([
        LoadNiftid(keys=['image', 'label']),
        AddChanneld(keys=['image', 'label']),
        Spacingd(keys=['image', 'label'], pixdim=(1.5, 1.5, 2), mode=('bilinear', 'nearest')),
        Orientationd(keys=['image', 'label'], axcodes='RAS'),
        # ScaleIntensityRanged(keys=['image'], a_min=300, a_max=1100, b_min=0.0, b_max=1.0, clip=True),
        ScaleIntensityRanged(keys=['image'], a_min=-57, a_max=164, b_min=0.0, b_max=1.0, clip=True),
        CropForegroundd(keys=['image', 'label'], source_key='image'),
        RandCropByPosNegLabeld(keys=['image', 'label'], label_key='label', spatial_size=(96, 96, 96),
                               pos=1, neg=1, num_samples=4, image_key='image', image_threshold=0),
        ToTensord(keys=['image', 'label'])
    ])

**Expected behavior:** training works with the spleen data.

I successfully ran the spleen data with the above transforms from spleen_segmentation_3d:

    epoch 1/100
    1/16, train_loss: 0.6632
    2/16, train_loss: 0.6708
    3/16, train_loss: 0.6756
    4/16, train_loss: 0.6699
    5/16, train_loss: 0.6500
    6/16, train_loss: 0.6744
    ...

However, when I try to load another dataset (not spleen), I encounter this error during epoch 1 with pixdim (1.5, 1.5, 2):

    epoch 1/100
    Traceback (most recent call last):
      File "/home/USER/PycharmProjects/MONAI/spleen_segmentation_3d.py", line 239, in <module>
        loss = loss_function(outputs, labels)
      File "/usr/lib/python3/dist-packages/torch/nn/modules/module.py", line 550, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/USER/PycharmProjects/MONAI/monai/losses/dice.py", line 132, in forward
        intersection = torch.sum(target * input, dim=reduce_axis)
    RuntimeError: CUDA error: device-side assert triggered

I get this error if pixdim is (1, 1, 1):

    epoch 1/100
    /build/pytorch-k8ICxt/pytorch-1.5.1+ds/aten/src/THC/THCTensorScatterGather.cu:190: THCudaTensor_scatterFillKernel: block: [1398,0,0], thread: [256,0,0] Assertion `indexValue >= 0 && indexValue < tensor.sizes[dim]` failed.
    /build/pytorch-k8ICxt/pytorch-1.5.1+ds/aten/src/THC/THCTensorScatterGather.cu:190: THCudaTensor_scatterFillKernel: block: [1416,0,0], thread: [352,0,0] Assertion `indexValue >= 0 && indexValue < tensor.sizes[dim]` failed.
    /build/pytorch-k8ICxt/pytorch-1.5.1+ds/aten/src/THC/THCTensorScatterGather.cu:190: THCudaTensor_scatterFillKernel: block: [1416,0,0], thread: [353,0,0] Assertion `indexValue >= 0 && indexValue < tensor.sizes[dim]` failed.
    ...

With pixdim (1.5, 1.5, 2) I get the same device-side assert traceback as shown above.

**Additional information:** This is the image shape when checking the dataset for spleen. I believe this shape is for the first image in val_files:

    check_ds = monai.data.Dataset(data=val_files, transform=val_transforms)
    check_loader = monai.data.DataLoader(check_ds, batch_size=1)
    check_data = monai.utils.misc.first(check_loader)
    image, label = (check_data['image'][0][0], check_data['label'][0][0])
    print(f"image shape: {image.shape}, label shape: {label.shape}")

Output for the spleen data:

    image shape: torch.Size([226, 157, 113]), label shape: torch.Size([226, 157, 113])

This is the corresponding image shape for the other organ using pixdim (1.5, 1.5, 2):

    image shape: torch.Size([290, 232, 178]), label shape: torch.Size([290, 232, 178])

I also tried caching the data and not caching it; either way worked for spleen:

    train_ds = monai.data.CacheDataset(data=train_files, transform=train_transforms, cache_rate=1.0, num_workers=4)
    val_ds = monai.data.CacheDataset(data=val_files, transform=val_transforms, cache_rate=1.0, num_workers=4)

vs.

    train_ds = monai.data.Dataset(data=train_files, transform=train_transforms)
    val_ds = monai.data.Dataset(data=val_files, transform=val_transforms)

Please let me know if anyone has an idea how to get training working for another organ segmentation. Also, let me know if there is some way I can check the image spacings to appropriately assign pixdim in the Spacingd transform, if that is indeed the problem. Thanks, Dom
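For reference, one way to inspect each volume's native voxel spacing before choosing a pixdim is to read it from the NIfTI header; a minimal sketch, assuming nibabel is installed (the data_root is the hypothetical path from above):

    import glob
    import os

    import nibabel as nib

    data_root = '/home/USER/PycharmProjects/MONAI/Task_other_organs'  # hypothetical path
    for path in sorted(glob.glob(os.path.join(data_root, 'imagesTr', '*.nii.gz'))):
        img = nib.load(path)
        # get_zooms() returns the voxel spacing in mm along each axis
        print(os.path.basename(path), img.shape, img.header.get_zooms())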

Nic-Ma commented 4 years ago

Hi @dlabella29 ,

Thanks for your interest and experiments here. Could you please remove Spacingd and CropForegroundd and then try again, to see whether you still can't train with the other datasets? If it still fails, add a DataStatsd transform before ScaleIntensityRanged to print out debug information.
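For example, a minimal sketch of that placement, based on the transform chain posted above (parameter values kept as in the original):

    train_transforms = Compose([
        LoadNiftid(keys=['image', 'label']),
        AddChanneld(keys=['image', 'label']),
        # Spacingd and CropForegroundd removed while debugging
        Orientationd(keys=['image', 'label'], axcodes='RAS'),
        DataStatsd(keys=['image', 'label'], data_value=False),  # prints shape and value range per key
        ScaleIntensityRanged(keys=['image'], a_min=-57, a_max=164, b_min=0.0, b_max=1.0, clip=True),
        RandCropByPosNegLabeld(keys=['image', 'label'], label_key='label', spatial_size=(96, 96, 96),
                               pos=1, neg=1, num_samples=4, image_key='image', image_threshold=0),
        ToTensord(keys=['image', 'label'])
    ])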

Thanks.

dlabella29 commented 4 years ago

Hi @Nic-Ma ,

I tried removing CropForegroundd only for the spleen dataset, and it still worked. When I also removed Spacingd from the spleen dataset, it no longer worked... It seems that any time Spacingd is not exactly right for a dataset, the model cannot train, due either to the loss function calculation or to loading the data into the model.

(DataStatsd(keys=['image', 'label'], data_value=False) added before ScaleIntensityRanged)

    DEBUG:DataStats:Data statistics: Shape: (1, 512, 512, 112) Value range: (-1024.0, 3071.0)
    DEBUG:DataStats:Data statistics: Shape: (1, 512, 512, 112) Value range: (0.0, 1.0)
    DEBUG:DataStats:Data statistics: Shape: (1, 512, 512, 94) Value range: (-1024.0, 1349.0)
    DEBUG:DataStats:Data statistics: Shape: (1, 512, 512, 94) Value range: (0.0, 1.0)
    DEBUG:DataStats:Data statistics: Shape: (1, 512, 512, 88) Value range: (-1024.0, 1413.0)
    DEBUG:DataStats:Data statistics: Shape: (1, 512, 512, 88) Value range: (0.0, 1.0)
    ...
    32/32 Load and cache transformed data: [==============================] line 201
    9/9 Load and cache transformed data: [==============================] line 157

    epoch 1/100
    Traceback (most recent call last):
      File "/home/USER/PycharmProjects/MONAI/spleen_segmentation_3d.py", line 241, in <module>
        for batch_data in train_loader:
      File "/usr/lib/python3/dist-packages/torch/utils/data/dataloader.py", line 345, in __next__
        data = self._next_data()
      File "/usr/lib/python3/dist-packages/torch/utils/data/dataloader.py", line 856, in _next_data
        return self._process_data(data)
      File "/usr/lib/python3/dist-packages/torch/utils/data/dataloader.py", line 881, in _process_data
        data.reraise()
      File "/usr/lib/python3/dist-packages/torch/_utils.py", line 395, in reraise
        raise self.exc_type(msg)
    ValueError: Caught ValueError in DataLoader worker process 0.
    Original Traceback (most recent call last):
      File "/home/USER/PycharmProjects/MONAI/monai/transforms/utils.py", line 277, in apply_transform
        return transform(data)
      File "/home/USER/PycharmProjects/MONAI/monai/transforms/croppad/dictionary.py", line 416, in __call__
        self.randomize(label, image)
      File "/home/USER/PycharmProjects/MONAI/monai/transforms/croppad/dictionary.py", line 408, in randomize
        self.centers = generate_pos_neg_label_crop_centers(
      File "/home/USER/PycharmProjects/MONAI/monai/transforms/utils.py", line 203, in generate_pos_neg_label_crop_centers
        raise ValueError("The proposed roi is larger than the image.")
    ValueError: The proposed roi is larger than the image.

During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/usr/lib/python3/dist-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
        data = fetcher.fetch(index)
      File "/usr/lib/python3/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/usr/lib/python3/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/home/USER/PycharmProjects/MONAI/monai/data/dataset.py", line 316, in __getitem__
        data = apply_transform(_transform, data)
      File "/home/USER/PycharmProjects/MONAI/monai/transforms/utils.py", line 279, in apply_transform
        raise type(e)(f"Applying transform {transform}.").with_traceback(e.__traceback__)
      File "/home/USER/PycharmProjects/MONAI/monai/transforms/utils.py", line 277, in apply_transform
        return transform(data)
      File "/home/USER/PycharmProjects/MONAI/monai/transforms/croppad/dictionary.py", line 416, in __call__
        self.randomize(label, image)
      File "/home/USER/PycharmProjects/MONAI/monai/transforms/croppad/dictionary.py", line 408, in randomize
        self.centers = generate_pos_neg_label_crop_centers(
      File "/home/USER/PycharmProjects/MONAI/monai/transforms/utils.py", line 203, in generate_pos_neg_label_crop_centers
        raise ValueError("The proposed roi is larger than the image.")
    ValueError: Applying transform <monai.transforms.croppad.dictionary.RandCropByPosNegLabeld object at 0x7fea3d50ef70>.

Reran without RandCropByPosNegLabeld:

    epoch 1/100
    Traceback (most recent call last):
      File "/home/USER/PycharmProjects/MONAI/segmentTestDL.py", line 241, in <module>
        for batch_data in train_loader:
      File "/usr/lib/python3/dist-packages/torch/utils/data/dataloader.py", line 345, in __next__
        data = self._next_data()
      File "/usr/lib/python3/dist-packages/torch/utils/data/dataloader.py", line 856, in _next_data
        return self._process_data(data)
      File "/usr/lib/python3/dist-packages/torch/utils/data/dataloader.py", line 881, in _process_data
        data.reraise()
      File "/usr/lib/python3/dist-packages/torch/_utils.py", line 395, in reraise
        raise self.exc_type(msg)
    RuntimeError: Caught RuntimeError in DataLoader worker process 0.
    Original Traceback (most recent call last):
      File "/usr/lib/python3/dist-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
        data = fetcher.fetch(index)
      File "/usr/lib/python3/dist-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
        return self.collate_fn(data)
      File "/home/USER/PycharmProjects/MONAI/monai/data/utils.py", line 222, in list_data_collate
        return default_collate(data)
      File "/usr/lib/python3/dist-packages/torch/utils/data/_utils/collate.py", line 74, in default_collate
        return {key: default_collate([d[key] for d in batch]) for key in elem}
      File "/usr/lib/python3/dist-packages/torch/utils/data/_utils/collate.py", line 74, in <dictcomp>
        return {key: default_collate([d[key] for d in batch]) for key in elem}
      File "/usr/lib/python3/dist-packages/torch/utils/data/_utils/collate.py", line 55, in default_collate
        return torch.stack(batch, 0, out=out)
    RuntimeError: stack expects each tensor to be equal size, but got [1, 316, 316, 206] at entry 0 and [1, 253, 253, 98] at entry 1

I am also not able to use other datasets with the same transform changes; I get similar error messages.

Any insight is appreciated. Thanks, Dom

Nic-Ma commented 4 years ago

Hi @dlabella29 ,

For your two problems, as the error messages in your log say:

  1. "The proposed roi is larger than the image." If without SpacingD transform, some images are smaller than 96 at D dim. You can try to use a smaller crop size, like (64, 64, 64) maybe.
  2. If don't use RandCropByPosNegLabeld, you need to use Resized transform or other crop transform to make sure the images are in same size, otherwise, they can't be stacked as batch data.
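A minimal sketch of both options (parameter values are illustrative, not prescriptive):

    from monai.transforms import RandCropByPosNegLabeld, Resized

    # option 1: a smaller random crop, so no volume is smaller than the roi
    crop = RandCropByPosNegLabeld(keys=['image', 'label'], label_key='label', spatial_size=(64, 64, 64),
                                  pos=1, neg=1, num_samples=4, image_key='image', image_threshold=0)

    # option 2: resize every volume to one fixed size so the batch can be stacked
    resize = Resized(keys=['image', 'label'], spatial_size=(96, 96, 96))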

Thanks.

dlabella29 commented 4 years ago

Hi @Nic-Ma

I figured out the issue. Your recommendation to use DataStatsd made me realize that the other organs' labels had an intensity range of [0, 255] instead of [0, 1]. Using the ScaleIntensity transform to map them to [0, 1] allowed the model to train successfully.
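For anyone hitting the same device-side assert, a minimal sketch of checking the label values and rescaling them (the file path is hypothetical; the dict-based ScaleIntensityd is applied to the label key):

    import numpy as np
    import nibabel as nib
    from monai.transforms import ScaleIntensityd

    # inspect what values the label volume actually contains
    label = nib.load('labelsTr/case_000.nii.gz').get_fdata()  # hypothetical file
    print(np.unique(label))  # e.g. [0. 255.] rather than the expected [0. 1.]

    # rescale the labels to [0, 1] inside the transform chain
    scale_label = ScaleIntensityd(keys=['label'], minv=0.0, maxv=1.0)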

Thanks for the help!

Dom

Nic-Ma commented 4 years ago

Cool! Please feel free to submit an issue if you face any other problems or questions. Thanks.

dlabella29 commented 4 years ago

Hi Nic,

I have encountered another issue. I have been training fine with most NIfTI files in a certain organ dataset; however, when I add in certain files I begin to receive a "roi end out of image space" error message.

I have tried modifying the CropForegroundd, CenterSpatialCropd, SpatialPadd, BorderPadd, and Spacingd transforms, as well as the spatial_size within RandCropByPosNegLabeld.

The problematic files and labels are valid NIfTI files, so I'm not sure what is causing the roi end out of image space error.

    Traceback (most recent call last):
      File "/home/dominic/PycharmProjects/MONAI/monai/transforms/utils.py", line 277, in apply_transform
        return transform(data)
      File "/home/dominic/PycharmProjects/MONAI/monai/transforms/croppad/dictionary.py", line 349, in __call__
        d[key] = cropper(d[key])
      File "/home/dominic/PycharmProjects/MONAI/monai/transforms/croppad/array.py", line 240, in __call__
        assert np.all(max_end[:sd] >= self.roi_end[:sd]), "roi end out of image space."
    AssertionError: roi end out of image space.

During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/home/dominic/PycharmProjects/MONAI/monai/transforms/utils.py", line 277, in apply_transform
        return transform(data)
      File "/home/dominic/PycharmProjects/MONAI/monai/transforms/compose.py", line 229, in __call__
        input_ = apply_transform(_transform, input_)
      File "/home/dominic/PycharmProjects/MONAI/monai/transforms/utils.py", line 279, in apply_transform
        raise type(e)(f"Applying transform {transform}.").with_traceback(e.__traceback__)
      File "/home/dominic/PycharmProjects/MONAI/monai/transforms/utils.py", line 277, in apply_transform
        return transform(data)
      File "/home/dominic/PycharmProjects/MONAI/monai/transforms/croppad/dictionary.py", line 349, in __call__
        d[key] = cropper(d[key])
      File "/home/dominic/PycharmProjects/MONAI/monai/transforms/croppad/array.py", line 240, in __call__
        assert np.all(max_end[:sd] >= self.roi_end[:sd]), "roi end out of image space."
    AssertionError: Applying transform <monai.transforms.croppad.dictionary.CropForegroundd object at 0x7f2fef7e8460>.

During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/home/dominic/PycharmProjects/MONAI/spleen_segmentation_3d.py", line 208, in <module>
        check_data = monai.utils.misc.first(check_loader)
      File "/home/dominic/PycharmProjects/MONAI/monai/utils/misc.py", line 41, in first
        for i in iterable:
      File "/usr/lib/python3/dist-packages/torch/utils/data/dataloader.py", line 345, in __next__
        data = self._next_data()
      File "/usr/lib/python3/dist-packages/torch/utils/data/dataloader.py", line 385, in _next_data
        data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
      File "/usr/lib/python3/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/usr/lib/python3/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/home/dominic/PycharmProjects/MONAI/monai/data/dataset.py", line 56, in __getitem__
        data = apply_transform(self.transform, data)
      File "/home/dominic/PycharmProjects/MONAI/monai/transforms/utils.py", line 279, in apply_transform
        raise type(e)(f"Applying transform {transform}.").with_traceback(e.__traceback__)
      File "/home/dominic/PycharmProjects/MONAI/monai/transforms/utils.py", line 277, in apply_transform
        return transform(data)
      File "/home/dominic/PycharmProjects/MONAI/monai/transforms/compose.py", line 229, in __call__
        input_ = apply_transform(_transform, input_)
      File "/home/dominic/PycharmProjects/MONAI/monai/transforms/utils.py", line 279, in apply_transform
        raise type(e)(f"Applying transform {transform}.").with_traceback(e.__traceback__)
      File "/home/dominic/PycharmProjects/MONAI/monai/transforms/utils.py", line 277, in apply_transform
        return transform(data)
      File "/home/dominic/PycharmProjects/MONAI/monai/transforms/croppad/dictionary.py", line 349, in __call__
        d[key] = cropper(d[key])
      File "/home/dominic/PycharmProjects/MONAI/monai/transforms/croppad/array.py", line 240, in __call__
        assert np.all(max_end[:sd] >= self.roi_end[:sd]), "roi end out of image space."
    AssertionError: Applying transform <monai.transforms.compose.Compose object at 0x7f2fef7e8580>.

Process finished with exit code 1

Let me know if you have any ideas.

Thanks!

Nic-Ma commented 4 years ago

Hi @dlabella29 ,

Please use the DataStatsd transform before the crop transform and check whether the image size is bigger than the crop size. If your image is smaller than the crop size, you can reduce the crop size or pad your image before cropping, as in the sketch below.
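A minimal sketch of the pad-before-crop approach (the spatial_size values are illustrative; SpatialPadd pads only the dimensions smaller than spatial_size and leaves larger ones untouched):

    from monai.transforms import DataStatsd, RandCropByPosNegLabeld, SpatialPadd

    debug_pad_crop = [
        DataStatsd(keys=['image', 'label'], data_value=False),            # log each volume's shape
        SpatialPadd(keys=['image', 'label'], spatial_size=(96, 96, 96)),  # pad small volumes up to the crop size
        RandCropByPosNegLabeld(keys=['image', 'label'], label_key='label', spatial_size=(96, 96, 96),
                               pos=1, neg=1, num_samples=4, image_key='image', image_threshold=0),
    ]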

Thanks.

dlabella29 commented 4 years ago

Hi Nic, I have figured out the above and can close the ticket.

One question as well. If I wanted to cite MONAI as a reference for publication, is there a citation available?

Thanks

Nic-Ma commented 4 years ago

Cool, glad to see your update. @wyli , could you please help confirm the citation?

Thanks.

wyli commented 4 years ago

The citation could be something like: The MONAI Consortium, Project MONAI: AI Toolkit for Healthcare Imaging, v0.3.0, https://github.com/Project-MONAI/MONAI. I'm also trying to get a DOI for the repo: https://guides.github.com/activities/citable-code/

wyli commented 4 years ago

The last comment was on citing the project; I'm closing this in favour of https://github.com/Project-MONAI/MONAI/issues/1166.

kvagdevi commented 3 years ago

> Cool! Please feel free to submit an issue if you face any other problems or questions. Thanks.

Hi, I just started working on MONAI with different datasets, and I am facing the same problem as @dlabella29. I have tried all the suggestions given by you two, but I am still facing the problem; please help me. Per @dlabella29's suggestion I used ScaleIntensity(minv=0.0, maxv=1.0, factor=None) and got:

    Traceback (most recent call last):
      File "/home/ccig/anaconda3/envs/3dsegmentation/lib/python3.8/site-packages/monai/transforms/utils.py", line 361, in apply_transform
        return transform(data)
      File "/home/ccig/anaconda3/envs/3dsegmentation/lib/python3.8/site-packages/monai/transforms/intensity/array.py", line 144, in __call__
        return rescale_array(img, self.minv, self.maxv, img.dtype)
    AttributeError: 'dict' object has no attribute 'dtype'

The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "/home/ccig/anaconda3/envs/3dsegmentation/lib/python3.8/site-packages/monai/transforms/utils.py", line 361, in apply_transform
        return transform(data)
      File "/home/ccig/anaconda3/envs/3dsegmentation/lib/python3.8/site-packages/monai/transforms/compose.py", line 236, in __call__
        input_ = apply_transform(_transform, input_)
      File "/home/ccig/anaconda3/envs/3dsegmentation/lib/python3.8/site-packages/monai/transforms/utils.py", line 363, in apply_transform
        raise RuntimeError(f"applying transform {transform}") from e
    RuntimeError: applying transform <monai.transforms.intensity.array.ScaleIntensity object at 0x7faefe12b400>

The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "/home/ccig/PycharmProjects/3dsegmentation/spleen.py", line 112, in <module>
        check_data = first(check_loader)
      File "/home/ccig/anaconda3/envs/3dsegmentation/lib/python3.8/site-packages/monai/utils/misc.py", line 46, in first
        for i in iterable:
      File "/home/ccig/anaconda3/envs/3dsegmentation/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
        data = self._next_data()
      File "/home/ccig/anaconda3/envs/3dsegmentation/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
        data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
      File "/home/ccig/anaconda3/envs/3dsegmentation/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/home/ccig/anaconda3/envs/3dsegmentation/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
        data = [self.dataset[idx] for idx in possibly_batched_index]
      File "/home/ccig/anaconda3/envs/3dsegmentation/lib/python3.8/site-packages/monai/data/dataset.py", line 69, in __getitem__
        data = apply_transform(self.transform, data)
      File "/home/ccig/anaconda3/envs/3dsegmentation/lib/python3.8/site-packages/monai/transforms/utils.py", line 363, in apply_transform
        raise RuntimeError(f"applying transform {transform}") from e
    RuntimeError: applying transform <monai.transforms.compose.Compose object at 0x7faefe12b5e0>

Process finished with exit code 1

Nic-Ma commented 3 years ago

Hi @kvagdevi ,

Could you please share your test program for debugging? I am afraid you used the wrong transform: if your data is in dict format, you should use ScaleIntensityd instead of ScaleIntensity, as in the sketch below.
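A minimal sketch of the difference (the array transform operates on the array itself, while the dict version operates on the keyed dictionary that a Compose chain passes around; the data here is made up for illustration):

    import numpy as np
    from monai.transforms import ScaleIntensity, ScaleIntensityd

    img = np.random.randint(0, 256, size=(1, 8, 8, 8)).astype(np.float32)
    data = {'image': img, 'label': (img > 127).astype(np.float32)}

    # array version: called directly on the image array
    img_scaled = ScaleIntensity(minv=0.0, maxv=1.0)(img)

    # dict version: called on the dictionary; scales only the listed keys
    data_scaled = ScaleIntensityd(keys=['image'], minv=0.0, maxv=1.0)(data)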

Thanks.

kvagdevi commented 3 years ago

> Hi @kvagdevi ,
>
> Could you please share your test program for debugging? I am afraid you used the wrong transform: if your data is in dict format, you should use ScaleIntensityd instead of ScaleIntensity.
>
> Thanks.

Thank you for your quick reply. After several combinations, using ScaleIntensityd instead of ScaleIntensity got the program running successfully with the same parameters as for Spleen, but after all epochs the Dice coefficient is zero, and the output images contain many repeated input images. When I tried without ScaleIntensity and ScaleIntensityd, I got an error at the loss function with out_channels=2; with out_channels=1 there is no issue at the loss function, but the problem is at val_labels = post_label(val_labels). In both cases the error is "index 2 is out of bound for 1 in dimension 2". Please help me resolve these issues; here is the relevant part of my code:

    train_transforms = Compose([
        LoadImaged(keys=["image", "label"]),
        AddChanneld(keys=["image", "label"]),
        Spacingd(keys=["image", "label"], pixdim=(1.5, 1.5, 2.0), mode=("bilinear", "nearest")),
        Orientationd(keys=["image", "label"], axcodes="RAS"),
        DataStatsd(keys=['image', 'label'], data_value=False),
        ScaleIntensityd(keys=["image", "label"], minv=0.0, maxv=1.0, factor=None),
        ScaleIntensityRanged(keys=["image"], a_min=-57, a_max=164, b_min=0.0, b_max=1.0, clip=True),
        CropForegroundd(keys=["image", "label"], source_key="image"),
        Resized(keys=["image"], spatial_size=(96, 96, 96), mode="nearest", align_corners=None),
        # user can also add other random transforms
        RandAffined(keys=['image', 'label'], mode=('bilinear', 'nearest'), prob=1.0, spatial_size=(512, 512, 96),
                    rotate_range=(0, 0, np.pi / 15), scale_range=(0.1, 0.1, 0.1)),
        ToTensord(keys=["image", "label"]),
    ])

    val_transforms = Compose([
        LoadImaged(keys=["image", "label"]),
        AddChanneld(keys=["image", "label"]),
        Spacingd(keys=["image", "label"], pixdim=(1.5, 1.5, 2.0), mode=("bilinear", "nearest")),
        Orientationd(keys=["image", "label"], axcodes="RAS"),
        ScaleIntensityd(keys=["image", "label"], minv=0.0, maxv=1.0, factor=None),
        DataStatsd(keys=['image', 'label'], data_value=False),
        ScaleIntensityRanged(keys=["image"], a_min=-57, a_max=164, b_min=0.0, b_max=1.0, clip=True),
        CropForegroundd(keys=["image", "label"], source_key="image"),
        Resized(keys=["image"], spatial_size=(96, 96, 96), mode="nearest", align_corners=None),
        RandAffined(keys=['image', 'label'], mode=('bilinear', 'nearest'), prob=1.0, spatial_size=(512, 512, 96),
                    rotate_range=(0, 0, np.pi / 15), scale_range=(0.1, 0.1, 0.1)),
        ToTensord(keys=["image", "label"]),
    ])

Vivekrpg commented 3 years ago

Hello,

I am running into similar issues while using RandCropByPosNegLabeld and trying to use DataStatsd to debug the input shape and range during the transforms. Does DataStatsd show debug information during training, or can it show debug information after running the DataLoader? I am not sure how to make DataStatsd print the debug information.

rijobro commented 3 years ago

Hi, the output of RandCropByPosNegLabeld is a list of the crops. The collator in the dataloader then takes care of turning lists (the output of RandCropByPosNegLabeld) and dictionaries (the output of most other dictionary transforms) into tensors, so the final output is the same. Hence, for normal use you wouldn't notice the difference between the outputs. However, DataStatsd is called before the dataloader, and it expects a dictionary as input, which explains your error.

The simplest solution here is to just move your DataStatsd ahead of your RandCropByPosNegLabeld transformation.
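For example, a minimal sketch of that ordering (earlier transforms elided):

    from monai.transforms import Compose, DataStatsd, RandCropByPosNegLabeld, ToTensord

    train_transforms = Compose([
        # ... earlier dictionary transforms ...
        DataStatsd(keys=['image', 'label'], data_value=False),  # the data is still a dict here, so stats print fine
        RandCropByPosNegLabeld(keys=['image', 'label'], label_key='label', spatial_size=(96, 96, 96),
                               pos=1, neg=1, num_samples=4, image_key='image', image_threshold=0),
        ToTensord(keys=['image', 'label']),
    ])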

@Nic-Ma We could modify the output of RandCropByPosNegLabeld like this:

# if only 1 sample requested, no point returning list
if self.num_samples == 1:
    return results[0]

return results

In this fashion, the user could set num_samples=1 for debugging and place DataStatsd after. What do you think?

kvagdevi commented 3 years ago

I converted .nii files to .png for detection purposes; the detected slices are then packed back into .nii format for compatibility with MONAI. At Spacingd I get ValueError: theta must be Nx3x3 or Nx4x4, got torch.Size([1, 6, 6]), so I commented out Spacingd. At Orientationd I get the same ValueError, so I commented out Orientationd. At Resized I get ValueError: len(spatial_size) must be greater or equal to img spatial dimensions, got spatial_size=3 img=5.

Before Resized, I used ScaleIntensityd, ScaleIntensityRanged, and CropForegroundd.

Without Resized: RuntimeError: stack expects each tensor to be equal size, but got [1, 134, 170, 80, 1, 3] at entry 0 and [1, 134, 170, 41, 1, 3] at entry 1.

I used the following code to convert the .png files in each folder to a .nii volume:

    path = '/home/ccig/PycharmProjects/yolov5_detection/Trail_folder_waste/masks_final_images/'
    for root, dirs, files in os.walk(path):
        for dir in dirs:
            dir_path_1 = os.path.join(path, dir + '/')
            file_names = sorted(glob.glob(dir_path_1 + '*.png'))
            print(dir)
            reader = sitk.ImageSeriesReader()
            reader.SetFileNames(file_names)
            vol = reader.Execute()
            sitk.WriteImage(vol, f'{dir}.nii.gz')

Please resolve my issue and help me. Thank you in advance.

rijobro commented 3 years ago

Hi @kvagdevi, I'm not completely sure what you're asking. I think you have transforms in some order and you get errors depending on which ones are commented out or not? I also don't understand the motivation for your nii->png->nii conversion.

If you would like help, could you create a new thread on our Discussions page, as this is (a) unrelated to the current topic and (b) requires more information for us to help you. It would also be beneficial to include a minimal working example using a publicly available dataset.

monalisanayak107 commented 2 years ago

> RandAffined(keys=['image', 'label'], mode=('bilinear', 'nearest'), prob=1.0, spatial_size=(512, 512, 96), rotate_range=(0, 0, np.pi/15), scale_range=(0.1, 0.1, 0.1)), ToTensord(keys=["image", "label"]),

Hi, I am also facing a similar issue. Have you found any solution to your problem? Would you be able to help?

kvagdevi commented 2 years ago

Contact me at 9441177331


YazdanSalimi commented 1 year ago

> Hi @dlabella29 ,
>
> Please use the DataStatsd transform before the crop transform and check whether the image size is bigger than the crop size. If your image is smaller than the crop size, you can reduce the crop size or pad your image before cropping.
>
> Thanks.

Thank you so much, this saved the day!