Bananaspirit closed this issue 9 months ago
@Bananaspirit thanks for raising this issue. Actually, torchvision's `Compose` doesn't require the list of transforms to be of a specific type. However, we were indeed not clear that the segmentation transforms should inherit from `SegmentationTransform`, or at least follow the dictionary convention for passing the image and mask. So the argument was not meant for torchvision transforms that would be applied to the image and mask separately. For example, it is also the place where you would pass augmentations with random behavior: if those were applied to the image and mask separately, the two would fall out of sync, so the proposed solution is not a good fit here. You can, however, use the `sample_transform` in order to apply transforms to the image only.
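The synchronization point can be sketched in plain Python (the class and names below are illustrative, not SuperGradients' actual code): a dict-style transform sees the image and mask together, so a single random draw drives both.

```python
import random

class PairedHorizontalFlip:
    """Illustrative dict-style transform: flips image and mask together,
    so a random augmentation never desynchronizes the pair."""

    def __init__(self, p=0.5):
        self.p = p

    def __call__(self, sample):
        # sample is {"image": ..., "mask": ...}; one coin flip
        # decides for both entries at once
        if random.random() < self.p:
            sample = {
                "image": [row[::-1] for row in sample["image"]],
                "mask": [row[::-1] for row in sample["mask"]],
            }
        return sample

sample = {"image": [[1, 2], [3, 4]], "mask": [[0, 1], [1, 0]]}
out = PairedHorizontalFlip(p=1.0)(sample)
```

Had the flip been applied to image and mask in two independent calls, each call would draw its own random number, and the mask could end up flipped while the image is not.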
You should know that a PR which removes a lot of this logic is in the works, with the goal of eventually migrating to Albumentations transforms.
@Bananaspirit tl;dr: use one of the `SegmentationTransform`s provided within SuperGradients, or alternatively use Albumentations transforms as in this example: https://github.com/Deci-AI/super-gradients/blob/4c32a698f54945f60d5edb7395906735283f45a2/src/super_gradients/recipes/dataset_params/cityscapes_regseg48_dataset_params.yaml#L13-L17
I'm closing this issue because there has been no follow-up; feel free to reopen it if you have more questions.
🐛 Describe the bug
To train DDR-NET, I used the `CoCoSegmentationDataSet` class to initialize the training set and the validation set. During initialization I ran into the problem that the images in the dataset are not all the same size, so they need to be resized. I implemented the resize using the `torchvision.transforms` module:
However, I got the following error:
I traced the error to the function `_transform_image_and_mask` of the class `SegmentationDataSet`, in the file `super_gradients/training/datasets/segmentation_datasets/segmentation_dataset.py`. If you look at what the variable `self.transforms` stores, it holds the following:

`self.transforms = transform.Compose(transforms if transforms else [])`

Accordingly, we pass a dictionary to `transform.Compose()`, but the composed functions expect a `torch.Tensor`. So I propose to fix this function and bring it to the following form:

This works for me! Best wishes to the project team!
Versions