Hi, I have a concern that you might have evaluated on different training data for CIFAR100/10-LT.
According to your code in cifar100Imbanlance.py, you generated the imbalanced data using a different method.
Other papers use Cao et al.'s method to generate CIFAR100/10-LT: fix np.random.seed(0) and use np.random.shuffle() to select the indices.
You used a different seed (3407 in the default argument) and a different method (np.random.choice(replace=False) to select the indices).
This means that your version of CIFAR100/10-LT is NOT the conventional dataset used in this field.
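For concreteness, here is a minimal sketch of the conventional generation procedure as I understand it from Cao et al.'s code (the exponential class-size profile plus seed-0 shuffle-and-truncate selection; function names here are my own, and the exact profile formula is an assumption based on their paper):

```python
import numpy as np

def exp_img_num_per_cls(img_max, cls_num, imb_factor):
    # Exponential long-tail profile, e.g. img_max=500, cls_num=100,
    # imb_factor=0.01 for CIFAR100-LT (assumed from Cao et al.'s setup).
    return [int(img_max * (imb_factor ** (c / (cls_num - 1))))
            for c in range(cls_num)]

def select_indices_conventional(targets, img_num_per_cls):
    # Cao et al.-style selection: fixed seed 0, shuffle each class's
    # index array, then keep the first N indices for that class.
    np.random.seed(0)
    targets = np.asarray(targets)
    selected = []
    for cls, n in enumerate(img_num_per_cls):
        idx = np.where(targets == cls)[0]
        np.random.shuffle(idx)
        # idx contains each index at most once, so repeats are impossible here.
        selected.extend(idx[:n])
    return np.array(selected)
```

Because each class's index array is unique by construction, truncating the shuffled array cannot yield repeated indices, which is the point I make about Issue #3 below.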
My suspicion was confirmed when I saw your reply in Issue #3.
The original method for generating CIFAR100/10-LT would not produce repeated indices in the first place.
This is a serious problem.
Please correct me if I'm wrong; I would also like to know how well your method works on the dataset actually used in this field.