Closed by yinguanchun 5 months ago
Hi, you are right, thanks for pointing out this bug. All the results could be further improved with this bug fixed; sorry for the negligence. Confusingly, when I completely removed the faulty data-augmentation code, performance dropped slightly, but when I corrected the code, performance dropped dramatically.
When I only use np.flip(image, axis=2), the model's performance decreases significantly. When I only use np.flip(image, axis=1), the performance increases. When I call np.flip(image, axis=2) but discard the return value, the performance also increases. I don't understand why this happens.
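For reference, a minimal NumPy sketch of the three variants above, using a toy array in place of the real scans (shapes and axis semantics here are assumptions, not taken from the repo):

```python
import numpy as np

image = np.arange(24).reshape(2, 3, 4)  # toy stand-in for a 3D image volume

# Variant 1: flip along axis 2 and keep the result -> image is augmented
aug_w = np.flip(image, axis=2)

# Variant 2: flip along axis 1 and keep the result -> a different augmentation
aug_h = np.flip(image, axis=1)

# Variant 3: call np.flip but discard the return value -> no augmentation at all,
# because np.flip returns a reversed view and never modifies its input in place
np.flip(image, axis=2)
print(np.array_equal(image, np.arange(24).reshape(2, 3, 4)))  # True
```

This is consistent with the observation above: the "flip without using the return value" variant behaves exactly like no augmentation.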
Hi, thanks for the update. Actually, the role of augmentation may not be that significant. The differences in the results can probably be attributed primarily to the unstable training process caused by the limited amount of data. It is worth noting that even when applying the same augmentations, the results still have a large std (Real LR Fake UD).
I found an error in your data-augmentation code.
As shown in the picture, when you call self._flip you never assign the return value, so the data augmentation has no effect.
Comparing the images before and after the data augmentation, I found that the number of identical voxels always equals the total number of elements, i.e., the image never changes. When the code is changed to item = self._flip(item, prob), the image does change after the augmentation.
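The diagnostic described above can be reproduced with a small sketch; the `_flip` helper here is a hypothetical stand-in for the repo's `self._flip`, not its actual implementation:

```python
import numpy as np

def _flip(item, prob, axis=2):
    # hypothetical stand-in for self._flip: returns a flipped copy
    # when the probability threshold is met, otherwise the input unchanged
    if prob > 0.5:
        return np.flip(item, axis=axis).copy()
    return item

rng = np.random.default_rng(0)
item = rng.random((4, 4, 4))
before = item.copy()

# Buggy call: return value discarded, so item never changes.
_flip(item, prob=1.0)
equal_voxels = np.sum(item == before)
print(equal_voxels == item.size)  # True: every voxel still identical

# Fixed call: assign the return value, so the flip actually takes effect.
item = _flip(item, prob=1.0)
print(np.sum(item == before) == item.size)  # False: the image changed
```

Counting identical voxels before and after, as done above, is a quick way to confirm whether an augmentation pipeline is actually modifying the data.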
Does this mean that all the experiments in your paper, including your own method, were effectively run without data augmentation?