xmed-lab / DHC

MICCAI 2023: DHC: Dual-debiased Heterogeneous Co-training Framework for Class-imbalanced Semi-supervised Medical Image Segmentation
MIT License

about augmentation #8

Closed: yinguanchun closed this issue 5 months ago

yinguanchun commented 6 months ago

I found an error in your data-augmentation code.

As shown in the screenshot, when you call self._flip you never assign its return value, so the data augmentation has no effect.

Comparing the images before and after augmentation confirms this: the number of unchanged voxels always equals the total number of elements, i.e., the image is never modified. After changing the code to item = self._flip(item, prob), the image actually changes after augmentation.
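For illustration, here is a minimal sketch of the pattern being described; the class and method bodies are hypothetical stand-ins, and only the self._flip call pattern comes from this thread:

```python
import numpy as np

class RandomFlip:
    """Hypothetical transform illustrating the bug discussed in this issue."""

    def _flip(self, item, prob):
        # np.flip returns a flipped view; the input array is not modified
        # in place, so the caller must keep the return value.
        if np.random.rand() < prob:
            item = np.flip(item, axis=2).copy()
        return item

    def __call__(self, item, prob=0.5):
        # Buggy: the return value is discarded, so `item` is never flipped.
        self._flip(item, prob)
        # Fixed: capture the return value.
        item = self._flip(item, prob)
        return item
```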

Does this mean that all the experiments in your paper, including those for your own method, were effectively run without data augmentation?

McGregorWwww commented 5 months ago

Hi, you are right, thanks for pointing out this bug. All the results could be further improved with this bug fixed. Sorry for the negligence.

yinguanchun commented 5 months ago

> Hi, you are right, thanks for pointing out this bug. All the results could be further improved with this bug fixed. Sorry for the negligence.

What I find incomprehensible: when I completely removed the faulty data-augmentation code, performance only dropped a bit, but when I actually corrected the code, performance dropped dramatically.

yinguanchun commented 5 months ago

Following up: when using only np.flip(image, axis=2), model performance decreases significantly. When using only np.flip(image, axis=1), performance increases. When using np.flip(image, axis=2) but discarding the return value (i.e., a no-op), performance also increases. I don't understand why this happens.
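For reference, a small sketch of what the two flips do, assuming the volumes are ordered (depth, height, width); the actual dimension ordering in the DHC data loaders is an assumption here:

```python
import numpy as np

# Toy volume with shape (depth, height, width); the ordering is assumed.
volume = np.arange(2 * 3 * 4).reshape(2, 3, 4)

flipped_ud = np.flip(volume, axis=1)  # flips each slice top-to-bottom
flipped_lr = np.flip(volume, axis=2)  # flips each slice left-to-right

# Discarding the return value leaves the input untouched, reproducing
# the original no-op behaviour:
np.flip(volume, axis=2)
assert np.array_equal(volume, np.arange(2 * 3 * 4).reshape(2, 3, 4))
```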

McGregorWwww commented 5 months ago

Hi, thanks for the update. Actually, the role of augmentation may not be that significant. The differences in the results can most likely be attributed to the unstable training caused by the limited amount of data. It is worth noting that even when applying exactly the same augmentations, the results still show a large standard deviation across runs.
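As an aside, one way to make that instability visible is to repeat each configuration over several seeds and report mean and standard deviation; everything below is a hypothetical placeholder, not code from this repo:

```python
import numpy as np

def train_and_evaluate(seed: int) -> float:
    """Placeholder for a full training run; returns a final Dice score."""
    rng = np.random.default_rng(seed)
    # Stand-in for genuine run-to-run variance on a small dataset.
    return float(0.65 + 0.05 * rng.standard_normal())

scores = np.array([train_and_evaluate(seed) for seed in (0, 1, 2)])
print(f"Dice over 3 seeds: {scores.mean():.4f} +/- {scores.std():.4f}")
```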