I double-checked this several times, and DistributedSampler returns the same order of images on the same rank.
(DistributedSampler: shuffle=True, DataLoader: shuffle=False)
That is, since unsupervised_train_loader_0 and unsupervised_train_loader_1 use the same rank, the same images are loaded. (They only look slightly different because of the randomness of image augmentation.)
Also, if you look at the shuffle branch of the DistributedSampler class at the URL below, the generator is seeded with "g.manual_seed(self.seed + self.epoch)".
( https://pytorch.org/docs/stable/_modules/torch/utils/data/distributed.html#DistributedSampler )
Therefore each sampler is shuffled with seed + epoch, and since your seed is 0 (the default), both samplers produce the same image order in every epoch (on the same rank).
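As a quick check, here is a minimal standalone sketch (plain PyTorch, not the TorchSemiSeg code) that reproduces this: two DistributedSamplers built with the same seed, epoch and rank return exactly the same index order.

```python
import torch
from torch.utils.data import TensorDataset, DistributedSampler

dataset = TensorDataset(torch.arange(16))  # dummy dataset with 16 samples

# rank/num_replicas are passed explicitly, so no process group is needed here
sampler_a = DistributedSampler(dataset, num_replicas=2, rank=0, shuffle=True, seed=0)
sampler_b = DistributedSampler(dataset, num_replicas=2, rank=0, shuffle=True, seed=0)
sampler_a.set_epoch(0)  # internally the generator is seeded with self.seed + self.epoch
sampler_b.set_epoch(0)

print(list(sampler_a))  # some permutation of the rank-0 indices
print(list(sampler_b))  # the identical permutation -> same image files on this rank
```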
I think the seeds of unsupervised_train_loader_0 and unsupervised_train_loader_1 should be different so that your code mixes different images.
(e.g., unsupervised_train_loader_0(train_dataset, seed=10+engine.local_rank), unsupervised_train_loader_1(train_dataset, seed=20+engine.local_rank)), as in the sketch below.
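For reference, this is roughly what I mean (the names and the way the loaders are built here are illustrative, not the actual get_train_loader code in the repo): give each unsupervised sampler its own base seed so the two loaders traverse the unlabeled set in different orders.

```python
import torch
from torch.utils.data import TensorDataset, DistributedSampler, DataLoader

train_dataset = TensorDataset(torch.arange(16))  # stand-in for the unlabeled dataset
world_size, local_rank = 2, 0                    # normally taken from engine / dist

# different base seeds -> different permutations for the two unsupervised loaders
sampler_0 = DistributedSampler(train_dataset, num_replicas=world_size, rank=local_rank,
                               shuffle=True, seed=10)
sampler_1 = DistributedSampler(train_dataset, num_replicas=world_size, rank=local_rank,
                               shuffle=True, seed=20)

unsupervised_train_loader_0 = DataLoader(train_dataset, batch_size=4, sampler=sampler_0)
unsupervised_train_loader_1 = DataLoader(train_dataset, batch_size=4, sampler=sampler_1)

# sampler_0.set_epoch(epoch) / sampler_1.set_epoch(epoch) should still be called
# every epoch, otherwise both loaders stay at the seed + 0 permutation.
```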
I have checked several times, and unsupervised_train_loader_0 and unsupervised_train_loader_1 output the same image files within the same rank. Therefore, I think your code mixes an image with itself.
If I'm mistaken, I'd appreciate it if you let me know.
Thank you!
Hi, thanks for your nice insights. :)
So you mean if we set each unsupervised_train_loader's seed differently, then we can finally use fully shuffled images, right?
First of all, please understand that I am asking a similar question again.
In your code (https://github.com/charlesCXK/TorchSemiSeg/blob/main/exp.voc/voc8.res50v3%2B.CPS%2BCutMix/train.py) and in my previous question (https://github.com/charlesCXK/TorchSemiSeg/issues/60):
I double-checked this several times, and DistributedSampler returns the same order of images on the same rank. (DistributedSampler: shuffle=True, DataLoader: shuffle=False) That is, since unsupervised_train_loader_0 and unsupervised_train_loader_1 use the same rank, the same images are loaded. (They only look slightly different because of the randomness of image augmentation.)
Also, if you look at the shuffle branch of the DistributedSampler class at the URL below, the generator is seeded with "g.manual_seed(self.seed + self.epoch)". (https://pytorch.org/docs/stable/_modules/torch/utils/data/distributed.html#DistributedSampler) Therefore each sampler is shuffled with seed + epoch, and since your seed is 0 (the default), both samplers produce the same image order in every epoch (on the same rank).
I think the seeds of unsupervised_train_loader_0 and unsupervised_train_loader_1 should be different so that your code mixes different images. (e.g., unsupervised_train_loader_0(train_dataset, seed=10+engine.local_rank), unsupervised_train_loader_1(train_dataset, seed=20+engine.local_rank))
Like other recent studies (e.g., the PS-MT CutMix: https://github.com/yyliu01/PS-MT/blob/main/VocCode/train.py#L83 or the U2PL CutMix: https://github.com/Haochen-Wang409/U2PL/blob/b818f01c9c11cf6ebfb9fe4d679d2901eefa3f3c/u2pl/dataset/augmentation.py#L498), I think the images should be mixed within the batch; a rough sketch is below.
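For illustration only (this is my own rough sketch, not code copied from PS-MT or U2PL): mixing within a single batch can be done by permuting the batch indices and pasting a random box from the permuted images, so the two views no longer have to come from two separate loaders.

```python
import torch

def rand_box(h, w, area_ratio=0.5):
    # pick a random box covering roughly area_ratio of the image
    cut_h, cut_w = int(h * area_ratio ** 0.5), int(w * area_ratio ** 0.5)
    cy, cx = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    return y1, y2, x1, x2

def cutmix_within_batch(images, labels):
    # images: (B, C, H, W), labels: (B, H, W); for unlabeled data the
    # "labels" would be pseudo-labels. Each sample receives a box cut
    # from another randomly chosen sample of the same batch.
    b, _, h, w = images.shape
    perm = torch.randperm(b)
    y1, y2, x1, x2 = rand_box(h, w)
    mixed_images, mixed_labels = images.clone(), labels.clone()
    mixed_images[:, :, y1:y2, x1:x2] = images[perm, :, y1:y2, x1:x2]
    mixed_labels[:, y1:y2, x1:x2] = labels[perm, y1:y2, x1:x2]
    return mixed_images, mixed_labels
```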
I have checked several times, and unsupervised_train_loader_0 and unsupervised_train_loader_1 output the same image files within the same rank. Therefore, I think your code mixes an image with itself.
If I'm mistaken, I'd appreciate it if you let me know. Thank you!