OatmealLiu / class-iNCD

PyTorch implementation for the paper Class-incremental Novel Class Discovery (ECCV 2022)

The effect of randomness #5

Closed zstarN70 closed 1 year ago

zstarN70 commented 2 years ago

Thanks for contributing the code; it is very helpful for follow-up research. I ran some experiments, and the effect of randomness on the paper's results seems to be significant. Maybe my approach is unreasonable, so I would like some help.

To speed up data loading, I adjusted the code in the following two files.

In cifarloader.py, I changed the dataset passed to the DataLoader to dataset_unlabeled; the adjusted call is sketched below.
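This is roughly what my adjusted loader looks like (a sketch; the variables dataset_unlabeled, batch_size, shuffle, and num_workers are the ones already defined in that function):

```python
# cifarloader.py -- adjusted loader (sketch)
from torch.utils import data

loader = data.DataLoader(
    dataset_unlabeled,    # changed: pass only the unlabeled dataset
    batch_size=batch_size,
    shuffle=shuffle,
    num_workers=num_workers,
    drop_last=True,       # drop the final incomplete batch
)
```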

In incd_ablation_expt.py, I commented out the lines that mask out the labeled-class samples; they are listed below.
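For clarity, these are the lines as they appear after my edit (commented out):

```python
# incd_ablation_expt.py -- lines I commented out
# mask_lb = label < args.num_labeled_classes
# x = x[~mask_lb]
# x_bar = x_bar[~mask_lb]
# label = label[~mask_lb]
```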

With only these adjustments, I get old: 66.8, new: 34.4 on the CIFAR-100 dataset. I was surprised by this result, and I don't know what I should do to eliminate such random effects.
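To be concrete about what I mean by random effects, this is the kind of seeding I have in mind (a minimal sketch using standard PyTorch/NumPy calls, not code from this repo; the set_seed name and the seed value 0 are just placeholders):

```python
import random

import numpy as np
import torch


def set_seed(seed: int = 0) -> None:
    """Fix the common sources of randomness for a PyTorch experiment."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade some speed for deterministic cuDNN kernels.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```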

Does this result mean I should use a smaller batch size to get a larger number of iterations?