Closed: dlabella29 closed this issue 4 years ago.
Hi @dlabella29 ,
Thanks for your interest and experiments here.
Could you please remove Spacingd and CropForegroundd, then try again? See whether you still can't train with the other datasets. If it still fails, add a DataStatsd transform before ScaleIntensityRanged to print out debug information.
Thanks.
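For reference, a minimal sketch of this debugging setup; the keys and intensity values here are assumptions borrowed from the spleen tutorial, not a prescription:

import numpy as np
from monai.transforms import (
    AddChanneld,
    Compose,
    DataStatsd,
    LoadImaged,
    ScaleIntensityRanged,
)

# DataStatsd logs the shape and value range of each keyed item, so placing it
# before ScaleIntensityRanged shows the raw (unscaled) data
debug_transforms = Compose([
    LoadImaged(keys=["image", "label"]),
    AddChanneld(keys=["image", "label"]),
    DataStatsd(keys=["image", "label"], data_value=False),
    ScaleIntensityRanged(keys=["image"], a_min=-57, a_max=164, b_min=0.0, b_max=1.0, clip=True),
])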
Hi @Nic-Ma ,
I tried removing CropForegroundd only for the spleen dataset, and it still worked. When I also tried removing Spacingd from the spleen dataset, it did not work anymore... It seems that anytime Spacingd is not set exactly right for a dataset, the model cannot train, failing either in the loss-function calculation or while loading data into the model.
(DataStatsd(keys=['image', 'label'], data_value=False) added before ScaleIntensityRanged)
DEBUG:DataStats:Data statistics: Shape: (1, 512, 512, 112) Value range: (-1024.0, 3071.0)
DEBUG:DataStats:Data statistics: Shape: (1, 512, 512, 112) Value range: (0.0, 1.0)
DEBUG:DataStats:Data statistics: Shape: (1, 512, 512, 94) Value range: (-1024.0, 1349.0)
DEBUG:DataStats:Data statistics: Shape: (1, 512, 512, 94) Value range: (0.0, 1.0)
DEBUG:DataStats:Data statistics: Shape: (1, 512, 512, 88) Value range: (-1024.0, 1413.0)
DEBUG:DataStats:Data statistics: Shape: (1, 512, 512, 88) Value range: (0.0, 1.0)
...
...
epoch 1/100
Traceback (most recent call last):
  File "/home/USER/PycharmProjects/MONAI/spleen_segmentation_3d.py", line 241, in <module>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/lib/python3/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/lib/python3/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
epoch 1/100
Traceback (most recent call last):
  File "/home/USER/PycharmProjects/MONAI/segmentTestDL.py", line 241, in <module>
I am also unable to use other datasets with the same changes to the transforms; I get similar error messages.
Any insight is appreciated. Thanks, Dom
Hi @dlabella29 ,
For your two problems, as the error messages in your log indicate:
1. After the Spacingd transform, some images are smaller than 96 in the D dimension. You can try a smaller crop size, like (64, 64, 64).
2. For RandCropByPosNegLabeld, you need to use the Resized transform (or another crop transform) to make sure the images all have the same size; otherwise they can't be stacked as batch data.
Thanks.
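A minimal sketch of both points, with the crop parameters assumed from the spleen tutorial:

from monai.transforms import Compose, RandCropByPosNegLabeld, Resized

fix_size_then_crop = Compose([
    # resize every volume to a common size so batches can be stacked
    Resized(keys=["image", "label"], spatial_size=(96, 96, 96), mode="nearest"),
    # a crop smaller than 96 also works for volumes that are shallow in D
    RandCropByPosNegLabeld(
        keys=["image", "label"], label_key="label",
        spatial_size=(64, 64, 64), pos=1, neg=1,
        num_samples=4, image_key="image", image_threshold=0,
    ),
])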
Hi @Nic-Ma
I figured out the issue. Your recommendation to use DataStatsd made me realize that the other organs' labels had an intensity range of [0, 255] instead of [0, 1]. Using the ScaleIntensity transform to map them to [0, 1] allowed the model to train successfully.
Thanks for the help!
Dom
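A sketch of the kind of fix described above, using the dictionary variant (the key name is an assumption):

from monai.transforms import ScaleIntensityd

# rescale binary masks stored as {0, 255} down to {0, 1};
# only safe when the label is a single foreground class
scale_label = ScaleIntensityd(keys=["label"], minv=0.0, maxv=1.0)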
Cool! Please feel free to submit issue if you face any other problem or question. Thanks.
Hi Nic,
I have encountered another issue. Training works fine with most NIfTI files in a certain organ dataset; however, when I add certain files I start receiving a "roi end out of image space" error.
I have tried modifying the CropForegroundd, CenterSpatialCropd, SpatialPadd, BorderPadd, and Spacingd transforms, as well as the spatial_size within RandCropByPosNegLabeld.
The problematic files and labels are valid NIfTI files, so I'm not sure what is causing the "roi end out of image space" error.
Traceback (most recent call last):
  File "/home/dominic/PycharmProjects/MONAI/monai/transforms/utils.py", line 277, in apply_transform
    return transform(data)
  File "/home/dominic/PycharmProjects/MONAI/monai/transforms/croppad/dictionary.py", line 349, in __call__
    d[key] = cropper(d[key])
  File "/home/dominic/PycharmProjects/MONAI/monai/transforms/croppad/array.py", line 240, in __call__
    assert np.all(max_end[:sd] >= self.roi_end[:sd]), "roi end out of image space."
AssertionError: roi end out of image space.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/home/dominic/PycharmProjects/MONAI/monai/transforms/utils.py", line 277, in apply_transform
    return transform(data)
  File "/home/dominic/PycharmProjects/MONAI/monai/transforms/compose.py", line 229, in __call__
    input_ = apply_transform(_transform, input_)
  File "/home/dominic/PycharmProjects/MONAI/monai/transforms/utils.py", line 279, in apply_transform
    raise type(e)(f"Applying transform {transform}.").with_traceback(e.__traceback__)
  File "/home/dominic/PycharmProjects/MONAI/monai/transforms/utils.py", line 277, in apply_transform
    return transform(data)
  File "/home/dominic/PycharmProjects/MONAI/monai/transforms/croppad/dictionary.py", line 349, in __call__
    d[key] = cropper(d[key])
  File "/home/dominic/PycharmProjects/MONAI/monai/transforms/croppad/array.py", line 240, in __call__
    assert np.all(max_end[:sd] >= self.roi_end[:sd]), "roi end out of image space."
AssertionError: Applying transform <monai.transforms.croppad.dictionary.CropForegroundd object at 0x7f2fef7e8460>.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/home/dominic/PycharmProjects/MONAI/spleen_segmentation_3d.py", line 208, in <module>
    check_data = monai.utils.misc.first(check_loader)
  File "/home/dominic/PycharmProjects/MONAI/monai/utils/misc.py", line 41, in first
    for i in iterable:
  File "/usr/lib/python3/dist-packages/torch/utils/data/dataloader.py", line 345, in __next__
    data = self._next_data()
  File "/usr/lib/python3/dist-packages/torch/utils/data/dataloader.py", line 385, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/usr/lib/python3/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/lib/python3/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/dominic/PycharmProjects/MONAI/monai/data/dataset.py", line 56, in __getitem__
    data = apply_transform(self.transform, data)
  File "/home/dominic/PycharmProjects/MONAI/monai/transforms/utils.py", line 279, in apply_transform
    raise type(e)(f"Applying transform {transform}.").with_traceback(e.__traceback__)
  File "/home/dominic/PycharmProjects/MONAI/monai/transforms/utils.py", line 277, in apply_transform
    return transform(data)
  File "/home/dominic/PycharmProjects/MONAI/monai/transforms/compose.py", line 229, in __call__
    input_ = apply_transform(_transform, input_)
  File "/home/dominic/PycharmProjects/MONAI/monai/transforms/utils.py", line 279, in apply_transform
    raise type(e)(f"Applying transform {transform}.").with_traceback(e.__traceback__)
  File "/home/dominic/PycharmProjects/MONAI/monai/transforms/utils.py", line 277, in apply_transform
    return transform(data)
  File "/home/dominic/PycharmProjects/MONAI/monai/transforms/croppad/dictionary.py", line 349, in __call__
    d[key] = cropper(d[key])
  File "/home/dominic/PycharmProjects/MONAI/monai/transforms/croppad/array.py", line 240, in __call__
    assert np.all(max_end[:sd] >= self.roi_end[:sd]), "roi end out of image space."
AssertionError: Applying transform <monai.transforms.compose.Compose object at 0x7f2fef7e8580>.
Process finished with exit code 1
Let me know if you have any ideas.
Thanks!
Hi @dlabella29 ,
Please use a DataStatsd transform before the crop transform and check whether the image size is bigger than the crop size.
If your image is smaller than the crop size, you can reduce the crop size or pad your image before cropping.
Thanks.
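A sketch of the pad-before-crop option, with the crop size and keys assumed from the tutorial:

from monai.transforms import Compose, RandCropByPosNegLabeld, SpatialPadd

pad_then_crop = Compose([
    # pad any spatial dimension smaller than (96, 96, 96) up to the crop size;
    # dimensions already larger are left untouched
    SpatialPadd(keys=["image", "label"], spatial_size=(96, 96, 96)),
    RandCropByPosNegLabeld(
        keys=["image", "label"], label_key="label",
        spatial_size=(96, 96, 96), pos=1, neg=1,
        num_samples=4, image_key="image", image_threshold=0,
    ),
])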
Hi Nic, I have figured out the above and can close the ticket.
One question as well. If I wanted to cite MONAI as a reference for publication, is there a citation available?
Thanks
Cool, glad to see your update. @wyli , could you please help confirm the citation?
Thanks.
The citation could be something like: The MONAI Consortium, Project MONAI: AI Toolkit for Healthcare Imaging, v0.3.0, https://github.com/Project-MONAI/MONAI
I'm also trying to get a DOI for the repo: https://guides.github.com/activities/citable-code/
The last comment was on citing the project; I'm closing it in favour of https://github.com/Project-MONAI/MONAI/issues/1166
Cool! Please feel free to submit issue if you face any other problem or question. Thanks.
Hi, I just started working with MONAI on different datasets, and I am facing the same problem as @dlabella29. I have tried all the suggestions given by you two, but I am still facing the problem; please help me.
As per @dlabella29's suggestion, I used ScaleIntensity(minv=0.0, maxv=1.0, factor=None) and got this error:
Traceback (most recent call last):
  File "/home/ccig/anaconda3/envs/3dsegmentation/lib/python3.8/site-packages/monai/transforms/utils.py", line 361, in apply_transform
    return transform(data)
  File "/home/ccig/anaconda3/envs/3dsegmentation/lib/python3.8/site-packages/monai/transforms/intensity/array.py", line 144, in __call__
    return rescale_array(img, self.minv, self.maxv, img.dtype)
AttributeError: 'dict' object has no attribute 'dtype'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/home/ccig/anaconda3/envs/3dsegmentation/lib/python3.8/site-packages/monai/transforms/utils.py", line 361, in apply_transform
    return transform(data)
  File "/home/ccig/anaconda3/envs/3dsegmentation/lib/python3.8/site-packages/monai/transforms/compose.py", line 236, in __call__
    input_ = apply_transform(_transform, input_)
  File "/home/ccig/anaconda3/envs/3dsegmentation/lib/python3.8/site-packages/monai/transforms/utils.py", line 363, in apply_transform
    raise RuntimeError(f"applying transform {transform}") from e
RuntimeError: applying transform <monai.transforms.intensity.array.ScaleIntensity object at 0x7faefe12b400>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/home/ccig/PycharmProjects/3dsegmentation/spleen.py", line 112, in <module>
Process finished with exit code 1
Hi @kvagdevi ,
Could you please share your test program for debugging?
I am afraid you used the wrong transform: if your data is in dict format, you should use ScaleIntensityd instead of ScaleIntensity.
Thanks.
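The difference in a short sketch (the array shape is arbitrary):

import numpy as np
from monai.transforms import ScaleIntensity, ScaleIntensityd

img = np.random.rand(1, 64, 64, 64).astype(np.float32)

# array variant: takes the image array directly
out = ScaleIntensity(minv=0.0, maxv=1.0)(img)

# dictionary variant: takes a dict and rescales the value under each key
out_d = ScaleIntensityd(keys=["image"], minv=0.0, maxv=1.0)({"image": img})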
Thank you for your quick reply. After trying several combinations, I replaced ScaleIntensity with ScaleIntensityd, and the program now runs successfully with the same parameters as for the spleen dataset. However, the Dice coefficient is zero for all epochs, and the output images contain many repeated input images. Please help me. When I tried without ScaleIntensity and ScaleIntensityd, I got an error at the loss function with out_channels=2; if I change to out_channels=1, there is no issue at the loss function, but the problem is at val_labels = post_label(val_labels). In both cases, index 2 is out of bounds for size 1 in dimension 2. Please help resolve these issues. Here is the relevant part of my code:
train_transforms = Compose(
    [
        LoadImaged(keys=["image", "label"]),
        AddChanneld(keys=["image", "label"]),
        Spacingd(keys=["image", "label"], pixdim=(1.5, 1.5, 2.0), mode=("bilinear", "nearest")),
        Orientationd(keys=["image", "label"], axcodes="RAS"),
        ScaleIntensityd(keys=["image", "label"], minv=0.0, maxv=1.0, factor=None),
        ScaleIntensityRanged(
            keys=["image"], a_min=-57, a_max=164, b_min=0.0, b_max=1.0, clip=True,
        ),
        CropForegroundd(keys=["image", "label"], source_key="image"),
        Resized(keys=["image"], spatial_size=(96, 96, 96), mode="nearest", align_corners=None),
        # user can also add other random transforms
        RandAffined(keys=["image", "label"], mode=("bilinear", "nearest"), prob=1.0,
                    spatial_size=(512, 512, 96), rotate_range=(0, 0, np.pi / 15),
                    scale_range=(0.1, 0.1, 0.1)),
        ToTensord(keys=["image", "label"]),
    ]
)
val_transforms = Compose(
    [
        LoadImaged(keys=["image", "label"]),
        AddChanneld(keys=["image", "label"]),
        Spacingd(keys=["image", "label"], pixdim=(1.5, 1.5, 2.0), mode=("bilinear", "nearest")),
        Orientationd(keys=["image", "label"], axcodes="RAS"),
        ScaleIntensityd(keys=["image", "label"], minv=0.0, maxv=1.0, factor=None),
        ScaleIntensityRanged(
            keys=["image"], a_min=-57, a_max=164, b_min=0.0, b_max=1.0, clip=True,
        ),
        CropForegroundd(keys=["image", "label"], source_key="image"),
        Resized(keys=["image"], spatial_size=(96, 96, 96), mode="nearest", align_corners=None),
        RandAffined(keys=["image", "label"], mode=("bilinear", "nearest"), prob=1.0,
                    spatial_size=(512, 512, 96), rotate_range=(0, 0, np.pi / 15),
                    scale_range=(0.1, 0.1, 0.1)),
        ToTensord(keys=["image", "label"]),
    ]
)
Hello,
I am running into similar issues while using RandCropByPosNegLabeld, and I am trying to use DataStatsd to inspect the input shape and value range during the transforms. Does DataStatsd show debug information during training, or can it show debug information after running the DataLoader? I am not sure how to get DataStatsd to print the debug information.
Hi, the output of RandCropByPosNegLabeld is a list of the crops. The collator in the dataloader then takes care of turning lists (the output of RandCropByPosNegLabeld) and dictionaries (the output of most other dictionary transforms) into tensors, so the final output is the same. Hence, for normal use you wouldn't notice the difference between the outputs. However, DataStatsd is called before the dataloader, and it expects a dictionary as input, which explains your error.
The simplest solution here is to move your DataStatsd ahead of your RandCropByPosNegLabeld transformation, as in the sketch below.
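An ordering like this (the crop parameters are assumptions taken from the tutorial):

from monai.transforms import Compose, DataStatsd, RandCropByPosNegLabeld

transforms = Compose([
    # DataStatsd still receives a dictionary at this point
    DataStatsd(keys=["image", "label"], data_value=False),
    # from here on, the pipeline carries a list of num_samples crops
    RandCropByPosNegLabeld(
        keys=["image", "label"], label_key="label",
        spatial_size=(96, 96, 96), pos=1, neg=1,
        num_samples=4, image_key="image", image_threshold=0,
    ),
])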
@Nic-Ma We could modify the output of RandCropByPosNegLabeld like this:
# if only 1 sample requested, no point returning a list
if self.num_samples == 1:
    return results[0]
return results
In this fashion, the user could set num_samples=1 for debugging and place DataStatsd after it. What do you think?
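Hypothetically, debugging would then look like this; note this assumes the change above were merged, which is not the behaviour at the time of writing:

from monai.transforms import Compose, DataStatsd, RandCropByPosNegLabeld

debug_pipeline = Compose([
    # with num_samples=1 the crop would return a plain dict again,
    # so DataStatsd could follow it directly
    RandCropByPosNegLabeld(
        keys=["image", "label"], label_key="label",
        spatial_size=(96, 96, 96), pos=1, neg=1,
        num_samples=1, image_key="image", image_threshold=0,
    ),
    DataStatsd(keys=["image", "label"], data_value=False),
])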
I converted .nii files to .png for detection purposes; the detected slices are then packed back into .nii format for compatibility with MONAI. At Spacingd I get: ValueError: theta must be Nx3x3 or Nx4x4, got torch.Size([1, 6, 6]), so I commented out Spacingd. At Orientationd: ValueError: theta must be Nx3x3 or Nx4x4, got torch.Size([1, 6, 6]), so I commented out Orientationd. At Resized: ValueError: len(spatial_size) must be greater or equal to img spatial dimensions, got spatial_size=3 img=5.
Before Resized, I used ScaleIntensityd, ScaleIntensityRanged, and CropForegroundd.
Without Resized: RuntimeError: stack expects each tensor to be equal size, but got [1, 134, 170, 80, 1, 3] at entry 0 and [1, 134, 170, 41, 1, 3] at entry 1.
I used the following code for the .png to .nii conversion:
path = '/home/ccig/PycharmProjects/yolov5_detection/Trail_folder_waste/masks_final_images/'
for root, dirs, files in os.walk(path):
    for dir in dirs:
        dir_path_1 = os.path.join(path, dir + '/')
        file_names = sorted(glob.glob(dir_path_1 + '*.png'))
        print(dir)
        reader = sitk.ImageSeriesReader()
        reader.SetFileNames(file_names)
        vol = reader.Execute()
        sitk.WriteImage(vol, f'{dir}.nii.gz')

Please resolve my issue and help me; thank you in advance.
Hi @kvagdevi, I'm not completely sure what you're asking. I think you have transforms in some order and get errors depending on which ones are commented out? I also don't understand the motivation for your nii -> png -> nii conversion.
If you would like help, could you create a new thread on our Discussions page, as this is (a) unrelated to the current topic and (b) requires more information for us to help you. It would also be beneficial to include a minimal working example using a publicly available dataset.
RandAffined(keys=['image', 'label'], mode=('bilinear', 'nearest'), prob=1.0, spatial_size=(512, 512, 96), rotate_range=(0, 0, np.pi/15), scale_range=(0.1, 0.1, 0.1)),
ToTensord(keys=["image", "label"]),
Hi, I am also facing a similar issue. Have you found any solution to your problem? Would you be able to help?
Contact me at 9441177331
Hi @dlabella29 ,
Please use a DataStatsd transform before the crop transform and check whether the image size is bigger than the crop size. If your image is smaller than the crop size, you can reduce the crop size or pad your image before cropping. Thanks.
Thank you so much, saved the day
Issue: Unable to use other datasets besides Task09_Spleen from the Medical Decathlon when running the spleen_segmentation_3d script. I believe the issue is with the Spacingd pixdim transform, but it could be elsewhere. The other organ dataset has the same file tree structure. I also tried the other dataset with the .nii images and labels resampled to 1x1x1.
Note: I imported the script into PyCharm and ran it as a .py file. The spleen dataset runs well, with Dice > 0.94 after 600 epochs.
Steps to reproduce the behavior:
2 (working for spleen): Download and reference the Task09_Spleen dataset from the Medical Decathlon.

data_root = r'/home/USER/PycharmProjects/MONAI/Task09_Spleen'
train_images = sorted(glob.glob(os.path.join(data_root, 'imagesTr', '*.nii.gz')))
train_labels = sorted(glob.glob(os.path.join(data_root, 'labelsTr', '*.nii.gz')))
2 (not working for other organ):

data_root = r'/home/USER/PycharmProjects/MONAI/Task_other_organs'
train_images = sorted(glob.glob(os.path.join(data_root, 'imagesTr', '*.nii.gz')))
train_labels = sorted(glob.glob(os.path.join(data_root, 'labelsTr', '*.nii.gz')))
3 (working for spleen): Use the default transforms:

train_transforms = Compose([
    LoadNiftid(keys=['image', 'label']),
    AddChanneld(keys=['image', 'label']),
    Spacingd(keys=['image', 'label'], pixdim=(1.5, 1.5, 2), mode=('bilinear', 'nearest')),
    Orientationd(keys=['image', 'label'], axcodes='RAS'),
    ScaleIntensityRanged(keys=['image'], a_min=-57, a_max=164, b_min=0.0, b_max=1.0, clip=True),
    CropForegroundd(keys=['image', 'label'], source_key='image'),
    RandCropByPosNegLabeld(keys=['image', 'label'], label_key='label', spatial_size=(96, 96, 96),
                           pos=1, neg=1, num_samples=4, image_key='image', image_threshold=0),
    ToTensord(keys=['image', 'label'])
])
3 (not working for other organs): I tried modified transforms that use an intensity range of 300 to 1600 for a bone window; see the sketch after this step. I believe my error is with the pixdim in the Spacingd transform. I have tried the default of (1.5, 1.5, 2), as well as (1, 1, 1), (3, 3, 4), and a number of other combinations. I have also tried the spleen intensity range of -57 to 164.
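For clarity, the modified intensity transform would look roughly like this (my sketch of the bone-window values mentioned above, intended to replace ScaleIntensityRanged inside the Compose list from step 3):

ScaleIntensityRanged(keys=['image'], a_min=300, a_max=1600, b_min=0.0, b_max=1.0, clip=True),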
Expected working behavior with spleen data
I successfully ran the spleen data with the above transforms from spleen_segmentation_3d:

epoch 1/100
1/16, train_loss: 0.6632
2/16, train_loss: 0.6708
3/16, train_loss: 0.6756
4/16, train_loss: 0.6699
5/16, train_loss: 0.6500
6/16, train_loss: 0.6744
...
However, when I try to load another dataset (not spleen), I encounter this error during epoch 1 with pixdim (1.5, 1.5, 2):

...
epoch 1/100
Traceback (most recent call last):
  File "/home/USER/PycharmProjects/MONAI/spleen_segmentation_3d.py", line 239, in <module>
    loss = loss_function(outputs, labels)
  File "/usr/lib/python3/dist-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/USER/PycharmProjects/MONAI/monai/losses/dice.py", line 132, in forward
    intersection = torch.sum(target * input, dim=reduce_axis)
RuntimeError: CUDA error: device-side assert triggered
I get this error if pixdim is (1, 1, 1):

...
epoch 1/100
/build/pytorch-k8ICxt/pytorch-1.5.1+ds/aten/src/THC/THCTensorScatterGather.cu:190: THCudaTensor_scatterFillKernel: block: [1398,0,0], thread: [256,0,0] Assertion `indexValue >= 0 && indexValue < tensor.sizes[dim]` failed.
/build/pytorch-k8ICxt/pytorch-1.5.1+ds/aten/src/THC/THCTensorScatterGather.cu:190: THCudaTensor_scatterFillKernel: block: [1416,0,0], thread: [352,0,0] Assertion `indexValue >= 0 && indexValue < tensor.sizes[dim]` failed.
/build/pytorch-k8ICxt/pytorch-1.5.1+ds/aten/src/THC/THCTensorScatterGather.cu:190: THCudaTensor_scatterFillKernel: block: [1416,0,0], thread: [353,0,0] Assertion `indexValue >= 0 && indexValue < tensor.sizes[dim]` failed.

I get this error if pixdim is (1.5, 1.5, 2):

...
epoch 1/100
Traceback (most recent call last):
  File "/home/USER/PycharmProjects/MONAI/spleen_segmentation_3d.py", line 239, in <module>
    loss = loss_function(outputs, labels)
  File "/usr/lib/python3/dist-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/USER/PycharmProjects/MONAI/monai/losses/dice.py", line 132, in forward
    intersection = torch.sum(target * input, dim=reduce_axis)
RuntimeError: CUDA error: device-side assert triggered
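As an aside, a standard way to localize device-side asserts like these (general PyTorch debugging practice, not something taken from this thread) is to force synchronous kernel launches before any CUDA work happens:

import os

# synchronous launches make the failing kernel appear in the Python traceback;
# must be set before CUDA is initialized
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"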
Additional information: this is the image shape when checking the dataset for spleen. I believe this shape is for the first image in val_files.

check_ds = monai.data.Dataset(data=val_files, transform=val_transforms)
check_loader = monai.data.DataLoader(check_ds, batch_size=1)
check_data = monai.utils.misc.first(check_loader)
image, label = (check_data['image'][0][0], check_data['label'][0][0])
print(f"image shape: {image.shape}, label shape: {label.shape}")

Output for spleen data:
image shape: torch.Size([226, 157, 113]), label shape: torch.Size([226, 157, 113])

This is the respective image shape for the other organ using pixdim (1.5, 1.5, 2):
image shape: torch.Size([290, 232, 178]), label shape: torch.Size([290, 232, 178])
I also tried using and not using caching of the data. Either way worked for spleen...
vs. train_ds = monai.data.Dataset(data=train_files, transform=train_transforms)
vs. val_ds = monai.data.Dataset(data=val_files, transform=val_transforms)
I also tried running on a Lambda 2x Titan RTX. Spleen worked; the other organ did not.
Please let me know if anyone has any idea how to get another organ's segmentation training. Also, let me know if there is some way I can check the image sizes to appropriately assign pixdim in the Spacingd transform, if that is indeed the problem. Thanks, Dom
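One possible way to inspect sizes and spacings before picking a pixdim (a sketch assuming nibabel is installed; the data_root path repeats the one used above):

import glob
import os

import nibabel as nib

data_root = '/home/USER/PycharmProjects/MONAI/Task_other_organs'
for path in sorted(glob.glob(os.path.join(data_root, 'imagesTr', '*.nii.gz'))):
    img = nib.load(path)
    # shape is in voxels, get_zooms() is voxel spacing in mm; after Spacingd the
    # shape becomes roughly shape * spacing / pixdim along each dimension
    print(path, img.shape, img.header.get_zooms())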