Deci-AI / super-gradients

Easily train or fine-tune SOTA computer vision models with one open source training library. The home of Yolo-NAS.
https://www.supergradients.com
Apache License 2.0

Incorrect application of image transformation and mask in the CoCoSegmentationDataSet class #1675

Closed Bananaspirit closed 9 months ago

Bananaspirit commented 10 months ago

🐛 Describe the bug

To train DDRNet, I used the CoCoSegmentationDataSet class to initialize the training and validation sets. During initialization I ran into the problem that the images in the dataset were not all the same size, so they needed to be resized. I implemented the resize using the torchvision.transforms module:

import torchvision.transforms as transform
from super_gradients.training.datasets import CoCoSegmentationDataSet

train_set = CoCoSegmentationDataSet(root_dir='PATH',
                                    samples_sub_directory='PATH',
                                    targets_sub_directory='PATH',
                                    list_file='PATH',
                                    transforms=[transform.Resize((960, 1280))])

However, I got the following error:

Traceback (most recent call last):
  File "/home/banana/Docs/VScode/Python/RSM_projects/Auto_Pilot/Kromka_Semantic/ddr-net-semantic-hypothesis/train.py", line 108, in <module>
    trainer.train(model=model, training_params=train_params, train_loader=train_dataloader, valid_loader=val_dataloader)
  File "/home/banana/Docs/VScode/Python/RSM_projects/Auto_Pilot/Kromka_Semantic/ddr-net-semantic-hypothesis/ddr_hyp/lib/python3.10/site-packages/super_gradients/training/sg_trainer/sg_trainer.py", line 1419, in train
    first_batch = next(iter(self.train_loader))
  File "/home/banana/Docs/VScode/Python/RSM_projects/Auto_Pilot/Kromka_Semantic/ddr-net-semantic-hypothesis/ddr_hyp/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 630, in __next__
    data = self._next_data()
  File "/home/banana/Docs/VScode/Python/RSM_projects/Auto_Pilot/Kromka_Semantic/ddr-net-semantic-hypothesis/ddr_hyp/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1345, in _next_data
    return self._process_data(data)
  File "/home/banana/Docs/VScode/Python/RSM_projects/Auto_Pilot/Kromka_Semantic/ddr-net-semantic-hypothesis/ddr_hyp/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1371, in _process_data
    data.reraise()
  File "/home/banana/Docs/VScode/Python/RSM_projects/Auto_Pilot/Kromka_Semantic/ddr-net-semantic-hypothesis/ddr_hyp/lib/python3.10/site-packages/torch/_utils.py", line 694, in reraise
    raise exception
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/banana/Docs/VScode/Python/RSM_projects/Auto_Pilot/Kromka_Semantic/ddr-net-semantic-hypothesis/ddr_hyp/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/banana/Docs/VScode/Python/RSM_projects/Auto_Pilot/Kromka_Semantic/ddr-net-semantic-hypothesis/ddr_hyp/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/banana/Docs/VScode/Python/RSM_projects/Auto_Pilot/Kromka_Semantic/ddr-net-semantic-hypothesis/ddr_hyp/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 51, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/banana/Docs/VScode/Python/RSM_projects/Auto_Pilot/Kromka_Semantic/ddr-net-semantic-hypothesis/ddr_hyp/lib/python3.10/site-packages/super_gradients/training/datasets/segmentation_datasets/segmentation_dataset.py", line 95, in __getitem__
    sample, target = self._transform_image_and_mask(sample, target)
  File "/home/banana/Docs/VScode/Python/RSM_projects/Auto_Pilot/Kromka_Semantic/ddr-net-semantic-hypothesis/ddr_hyp/lib/python3.10/site-packages/super_gradients/training/datasets/segmentation_datasets/segmentation_dataset.py", line 210, in _transform_image_and_mask
    transformed = self.transforms({"image": image, "mask": mask})
  File "/home/banana/Docs/VScode/Python/RSM_projects/Auto_Pilot/Kromka_Semantic/ddr-net-semantic-hypothesis/ddr_hyp/lib/python3.10/site-packages/torchvision/transforms/transforms.py", line 95, in __call__
    img = t(img)
  File "/home/banana/Docs/VScode/Python/RSM_projects/Auto_Pilot/Kromka_Semantic/ddr-net-semantic-hypothesis/ddr_hyp/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/banana/Docs/VScode/Python/RSM_projects/Auto_Pilot/Kromka_Semantic/ddr-net-semantic-hypothesis/ddr_hyp/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/banana/Docs/VScode/Python/RSM_projects/Auto_Pilot/Kromka_Semantic/ddr-net-semantic-hypothesis/ddr_hyp/lib/python3.10/site-packages/torchvision/transforms/transforms.py", line 361, in forward
    return F.resize(img, self.size, self.interpolation, self.max_size, self.antialias)
File "/home/banana/Docs/VScode/Python/RSM_projects/Auto_Pilot/Kromka_Semantic/ddr-net-semantic-hypothesis/ddr_hyp/lib/python3.10/site-packages/torchvision/transforms/functional.py", line 476, in resize
    _, image_height, image_width = get_dimensions(img)
  File "/home/banana/Docs/VScode/Python/RSM_projects/Auto_Pilot/Kromka_Semantic/ddr-net-semantic-hypothesis/ddr_hyp/lib/python3.10/site-packages/torchvision/transforms/functional.py", line 78, in get_dimensions
    return F_pil.get_dimensions(img)
  File "/home/banana/Docs/VScode/Python/RSM_projects/Auto_Pilot/Kromka_Semantic/ddr-net-semantic-hypothesis/ddr_hyp/lib/python3.10/site-packages/torchvision/transforms/_functional_pil.py", line 31, in get_dimensions
    raise TypeError(f"Unexpected type {type(img)}")
TypeError: Unexpected type <class 'dict'>

I traced the error to the function _transform_image_and_mask of the SegmentationDataSet class, in the file super_gradients/training/datasets/segmentation_datasets/segmentation_dataset.py:

def _transform_image_and_mask(self, image, mask) -> tuple:
    """
    :param image:           The input image
    :param mask:            The input mask
    :return:                The transformed image, mask
    """
    # original code:
    transformed = self.transforms({"image": image, "mask": mask})
    return transformed["image"], transformed["mask"]

If you look at what the variable self.transforms stores, it holds: self.transforms = transform.Compose(transforms if transforms else []). Accordingly, we pass a dictionary into transform.Compose(), but each torchvision transform expects a PIL image or a tensor, not a dict. So I propose to fix this function and bring it to the following form:

def _transform_image_and_mask(self, image, mask) -> tuple:
    """
    :param image:           The input image
    :param mask:            The input mask
    :return:                The transformed image, mask
    """
    # my code:
    return self.transforms(image), self.transforms(mask)

This works for me! Best wishes to the project team!

Versions

PyTorch version: 2.1.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.24.1
Libc version: glibc-2.35

Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1650 Ti
Nvidia driver version: 525.147.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Address sizes:                      39 bits physical, 48 bits virtual
Byte Order:                         Little Endian
CPU(s):                             8
On-line CPU(s) list:                0-7
Vendor ID:                          GenuineIntel
Model name:                         Intel(R) Core(TM) i5-9300H CPU @ 2.40GHz
CPU family:                         6
Model:                              158
Thread(s) per core:                 2
Core(s) per socket:                 4
Socket(s):                          1
Stepping:                           10
CPU max MHz:                        4100.0000
CPU min MHz:                        800.0000
BogoMIPS:                           4800.00
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d arch_capabilities
Virtualization:                     VT-x
L1d cache:                          128 KiB (4 instances)
L1i cache:                          128 KiB (4 instances)
L2 cache:                           1 MiB (4 instances)
L3 cache:                           8 MiB (1 instance)
NUMA node(s):                       1
NUMA node0 CPU(s):                  0-7
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit:        KVM: Mitigation: VMX disabled
Vulnerability L1tf:                 Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds:                  Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown:             Mitigation; PTI
Vulnerability Mmio stale data:      Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed:             Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds:                Mitigation; Microcode
Vulnerability Tsx async abort:      Not affected

Versions of relevant libraries:
[pip3] numpy==1.23.0
[pip3] onnx==1.13.0
[pip3] onnx-simplifier==0.4.35
[pip3] onnxruntime==1.13.1
[pip3] torch==2.1.0
[pip3] torchmetrics==0.8.0
[pip3] torchvision==0.16.0
[pip3] triton==2.1.0

super-gradients==3.4.0
[conda] Could not collect
shaydeci commented 10 months ago

@Bananaspirit thanks for raising this issue. Actually, torchvision's Compose doesn't require the list of transforms to be of a specific type. However, we were indeed not clear that segmentation transforms should inherit from SegmentationTransform, or at least follow the dictionary convention for passing the image and mask. The argument wasn't meant for torchvision transforms applied separately. For example, it is also where you would pass augmentations with random characteristics; if those were applied to the image and the mask separately, each call would draw its own random parameters and the mask would no longer line up with the image, so the proposed solution is not a good fit here. You can, however, use sample_transform in order to apply transforms to the image only.
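
For illustration, here is a minimal sketch of a transform that follows that dictionary convention (DictResize is a hypothetical name for this example, not a SuperGradients class):

import torchvision.transforms.functional as F
from torchvision.transforms import InterpolationMode

class DictResize:
    """Resize the image and mask of a {"image": ..., "mask": ...} sample together."""

    def __init__(self, size):
        self.size = size  # (height, width)

    def __call__(self, sample):
        sample["image"] = F.resize(sample["image"], self.size)
        # nearest-neighbour for the mask so class ids are not blended by interpolation
        sample["mask"] = F.resize(sample["mask"], self.size, interpolation=InterpolationMode.NEAREST)
        return sample

Passed as transforms=[DictResize((960, 1280))], torchvision's Compose hands the whole sample dict to it, so the image and mask are always resized with the same geometry.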

You should know that a PR which takes out a lot of this logic is in the works, with the goal of eventually migrating to Albumentations transforms.

Louis-Dupont commented 9 months ago

@Bananaspirit tl;dr: use one of the SegmentationTransform classes provided within SuperGradients, or alternatively use Albumentations transforms as in this example: https://github.com/Deci-AI/super-gradients/blob/4c32a698f54945f60d5edb7395906735283f45a2/src/super_gradients/recipes/dataset_params/cityscapes_regseg48_dataset_params.yaml#L13-L17
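
For reference, a minimal standalone sketch of the Albumentations behaviour (plain albumentations usage with dummy data, not the recipe integration shown in the YAML above):

import albumentations as A
import numpy as np

# Albumentations applies the same geometric parameters to the image and the
# mask, so the two stay aligned even when the transform is random.
aug = A.Compose([A.Resize(height=960, width=1280)])

image = np.zeros((720, 960, 3), dtype=np.uint8)  # dummy image
mask = np.zeros((720, 960), dtype=np.uint8)      # dummy mask
out = aug(image=image, mask=mask)
resized_image, resized_mask = out["image"], out["mask"]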

I'm closing this issue because there has been no follow-up; feel free to reopen it if you have more questions.